hadoop-common-dev mailing list archives

From "Carlos Valiente (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-5257) Export namenode/datanode functionality through a pluggable RPC layer
Date Fri, 13 Feb 2009 18:43:00 GMT
Export namenode/datanode functionality through a pluggable RPC layer
--------------------------------------------------------------------

                 Key: HADOOP-5257
                 URL: https://issues.apache.org/jira/browse/HADOOP-5257
             Project: Hadoop Core
          Issue Type: New Feature
          Components: dfs
            Reporter: Carlos Valiente
            Priority: Minor


Adding support for pluggable components would allow exporting DFS functionality using arbitrary
protocols, such as Thrift or Protocol Buffers. I'm opening this issue on Dhruba's suggestion
in HADOOP-4707.

Plug-in implementations would extend this base class:

{code}abstract class Plugin {

    public abstract void datanodeStarted(DataNode datanode);

    public abstract void datanodeStopping();

    public abstract void namenodeStarted(NameNode namenode);

    public abstract void namenodeStopping();
}{code}
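
For illustration, a concrete plug-in might look something like this. This is only a sketch; the {{ThriftDatanodePlugin}} name and the background-thread wiring are assumptions for the example, not part of the proposal:

{code}// Hypothetical example: a plugin that exports datanode functionality
// over some wire protocol (Thrift, Protocol Buffers, ...).
public class ThriftDatanodePlugin extends Plugin {

    private DataNode datanode;    // kept so the server loop can delegate to it
    private Thread serverThread;  // would run the protocol server loop

    public void datanodeStarted(DataNode datanode) {
        this.datanode = datanode;
        // Run the protocol server in the background so the datanode's
        // own startup is not blocked.
        serverThread = new Thread(new Runnable() {
            public void run() {
                // Accept connections and serve requests here,
                // delegating to the datanode instance.
            }
        });
        serverThread.setDaemon(true);
        serverThread.start();
    }

    public void datanodeStopping() {
        // Ask the server loop to exit and release its resources.
        if (serverThread != null)
            serverThread.interrupt();
    }

    public void namenodeStarted(NameNode namenode) {}

    public void namenodeStopping() {}
}{code}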

Name node instances would then start the plug-ins according to a configuration object, and
would also shut them down when the node goes down:

{code}public class NameNode {

    private List<Plugin> plugins;

    // [..]

    private void initialize(Configuration conf) {
        // [...]
        plugins = PluginManager.loadPlugins(conf);
        for (Plugin p : plugins)
            p.namenodeStarted(this);
    }

    // [..]

    public void stop() {
        if (stopRequested)
            return;
        stopRequested = true;
        for (Plugin p : plugins)
            p.namenodeStopping();
        // [..]
    }

    // [..]
}{code}
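
One way {{PluginManager.loadPlugins(conf)}} could work is to read a comma-separated list of class names from the configuration and instantiate each one reflectively. A minimal sketch follows; the {{dfs.plugins}} key is a hypothetical name, not something already in the configuration:

{code}import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

public class PluginManager {

    public static List<Plugin> loadPlugins(Configuration conf)
            throws ClassNotFoundException {
        List<Plugin> plugins = new ArrayList<Plugin>();
        // e.g. dfs.plugins = com.example.ThriftPlugin,com.example.PbPlugin
        String[] classNames = conf.getStrings("dfs.plugins");
        if (classNames == null)
            return plugins;
        for (String name : classNames) {
            Class<?> cls = Class.forName(name.trim());
            // ReflectionUtils also injects the Configuration if the
            // plugin class implements Configurable.
            plugins.add((Plugin) ReflectionUtils.newInstance(cls, conf));
        }
        return plugins;
    }
}{code}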

Data nodes would do the same thing in {{DataNode.startDatanode()}} and {{DataNode.shutdown()}}.
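
The datanode side might look like this, mirroring the name node hooks above (again just a sketch of where the calls could go):

{code}public class DataNode {

    private List<Plugin> plugins;

    // [..]

    void startDatanode(Configuration conf) throws ClassNotFoundException {
        // [...]
        plugins = PluginManager.loadPlugins(conf);
        for (Plugin p : plugins)
            p.datanodeStarted(this);
    }

    public void shutdown() {
        for (Plugin p : plugins)
            p.datanodeStopping();
        // [..]
    }
}{code}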

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

