hadoop-common-dev mailing list archives

From "Carlos Valiente (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5257) Export namenode/datanode functionality through a pluggable RPC layer
Date Tue, 24 Mar 2009 20:17:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12688845#action_12688845 ]

Carlos Valiente commented on HADOOP-5257:
-----------------------------------------

bq.1. which classloader is being used to load classes?

Classes are loaded by {{Configuration.getInstances}}, which ends up calling {{Configuration.getClassByName}},
which uses the instance field {{Configuration.classLoader}}. That field is initialised by
this code fragment:

{code}
  private ClassLoader classLoader;
  {
    // Prefer the thread's context class loader; fall back to the
    // loader that loaded the Configuration class itself.
    classLoader = Thread.currentThread().getContextClassLoader();
    if (classLoader == null) {
      classLoader = Configuration.class.getClassLoader();
    }
  }
{code}
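
For illustration, a minimal sketch of the call path (the configuration key and plugin class name here are hypothetical, and {{Plugin}} is the base class proposed below):

{code}
Configuration conf = new Configuration();
conf.setStrings("dfs.plugins", "org.example.MyPlugin");
// getInstances() resolves each listed name through getClassByName(),
// which consults the classLoader field initialised above.
List<Plugin> plugins = conf.getInstances("dfs.plugins", Plugin.class);
{code}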

bq.2. If parsing a string value to a list is useful, this should really go into Configuration,
not the plugin classes, as that is one place to implement string trim policy, write the unit
tests, etc.

I'm not sure I follow you on this point, Steve: class-name parsing is already delegated to {{Configuration.getClasses}}
(which in turn delegates the splitting to {{StringUtils.getStrings}}, it seems).
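
As a rough illustration of that delegation (the class names are made up):

{code}
// StringUtils.getStrings splits a comma-separated configuration value
// into individual class names before each one is resolved.
String[] names = StringUtils.getStrings("org.example.A,org.example.B");
// names[0] == "org.example.A", names[1] == "org.example.B"
{code}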

bq.3. I like the tests, add one to try loading a class that isn't there

{{org.apache.hadoop.conf.TestGetInstances}} already does that:

{code}
    try {
      conf.setStrings("some.classes",
          SampleClass.class.getName(), AnotherClass.class.getName(),
          "no.such.Class");
      conf.getInstances("some.classes", SampleInterface.class);
      fail("no.such.Class does not exist");
    } catch (RuntimeException e) {
      // Expected: no.such.Class cannot be loaded.
    }
{code}

Do you think it would be better to write it in a different way?
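
One possibility (a sketch only; it assumes {{getInstances}} keeps the {{ClassNotFoundException}} as the cause of the {{RuntimeException}} it throws) would be to assert on the cause explicitly:

{code}
    try {
      conf.getInstances("some.classes", SampleInterface.class);
      fail("Expected a failure for no.such.Class");
    } catch (RuntimeException e) {
      // Assumption: the missing class surfaces as the exception's cause.
      assertTrue(e.getCause() instanceof ClassNotFoundException);
    }
{code}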

> Export namenode/datanode functionality through a pluggable RPC layer
> --------------------------------------------------------------------
>
>                 Key: HADOOP-5257
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5257
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Carlos Valiente
>            Priority: Minor
>         Attachments: HADOOP-5257-v2.patch, HADOOP-5257-v3.patch, HADOOP-5257-v4.patch, HADOOP-5257-v5.patch, HADOOP-5257-v6.patch, HADOOP-5257-v7.patch, HADOOP-5257-v8.patch, HADOOP-5257.patch
>
>
> Adding support for pluggable components would allow exporting DFS functionality using arbitrary protocols, like Thrift or Protocol Buffers. I'm opening this issue on Dhruba's suggestion in HADOOP-4707.
> Plug-in implementations would extend this base class:
> {code}abstract class Plugin {
>     public abstract void datanodeStarted(DataNode datanode);
>     public abstract void datanodeStopping();
>     public abstract void namenodeStarted(NameNode namenode);
>     public abstract void namenodeStopping();
> }{code}
> Name node instances would then start the plug-ins according to a configuration object, and would also shut them down when the node goes down:
> {code}public class NameNode {
>     // [..]
>     private void initialize(Configuration conf) {
>         // [...]
>         for (Plugin p: PluginManager.loadPlugins(conf))
>           p.namenodeStarted(this);
>     }
>     // [..]
>     public void stop() {
>         if (stopRequested)
>             return;
>         stopRequested = true;
>         for (Plugin p: plugins) 
>             p.namenodeStopping();
>         // [..]
>     }
>     // [..]
> }{code}
> Data nodes would do a similar thing in {{DataNode.startDatanode()}} and {{DataNode.shutdown()}}.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

