Message-ID: <917316113.1237925870683.JavaMail.jira@brutus>
Date: Tue, 24 Mar 2009 13:17:50 -0700 (PDT)
From: "Carlos Valiente (JIRA)"
Reply-To: core-dev@hadoop.apache.org
To: core-dev@hadoop.apache.org
Subject: [jira] Commented: (HADOOP-5257) Export namenode/datanode functionality through a pluggable RPC layer
In-Reply-To: <466227955.1234550580115.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12688845#action_12688845 ]

Carlos Valiente commented on HADOOP-5257:
-----------------------------------------

bq. 1. which classloader is being used to load classes?

Classes are loaded by {{Configuration.getInstances}}, which ends up calling {{Configuration.getClassByName}}, which uses the instance field {{Configuration.classLoader}}. That field is initialised by this code fragment:

{code}
private ClassLoader classLoader;
{
  classLoader = Thread.currentThread().getContextClassLoader();
  if (classLoader == null) {
    classLoader = Configuration.class.getClassLoader();
  }
}
{code}

bq. 2. If parsing a string value to a list is useful, this should really go into Configuration, not the plugin classes, as that is one place to implement string trim policy, write the unit tests, etc.

I'm not sure I follow you on this point, Steve: class name parsing is already delegated to {{Configuration.getClasses}} (which in turn delegates the splitting to {{StringUtils.getStrings}}, it seems).

bq. 3. I like the tests, add one to try loading a class that isn't there

{{org.apache.hadoop.conf.TestGetInstances}} already does that:

{code}
try {
  conf.setStrings("some.classes",
      SampleClass.class.getName(),
      AnotherClass.class.getName(),
      "no.such.Class");
  conf.getInstances("some.classes", SampleInterface.class);
  fail("no.such.Class does not exist");
} catch (RuntimeException e) {}
{code}

Do you think it would be better to write it in a different way?
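For what it's worth, one slightly stricter variant would be to assert that the failure really comes from the missing class rather than from some unrelated {{RuntimeException}}. Just a sketch, assuming JUnit's {{assertTrue}} is in scope and that the message of the wrapping {{RuntimeException}} names the class that could not be found:

{code}
// Sketch only: same fixtures (conf, SampleClass, AnotherClass,
// SampleInterface) as the test above; assumes the wrapped
// ClassNotFoundException's message mentions the missing class name.
try {
  conf.setStrings("some.classes",
      SampleClass.class.getName(),
      AnotherClass.class.getName(),
      "no.such.Class");
  conf.getInstances("some.classes", SampleInterface.class);
  fail("no.such.Class does not exist");
} catch (RuntimeException e) {
  // Only pass if the error points at the class that is actually missing.
  assertTrue(e.getMessage().contains("no.such.Class"));
}
{code}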
> Export namenode/datanode functionality through a pluggable RPC layer
> --------------------------------------------------------------------
>
>                 Key: HADOOP-5257
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5257
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Carlos Valiente
>            Priority: Minor
>         Attachments: HADOOP-5257-v2.patch, HADOOP-5257-v3.patch, HADOOP-5257-v4.patch, HADOOP-5257-v5.patch, HADOOP-5257-v6.patch, HADOOP-5257-v7.patch, HADOOP-5257-v8.patch, HADOOP-5257.patch
>
>
> Adding support for pluggable components would allow exporting DFS functionality using arbitrary protocols, like Thrift or Protocol Buffers. I'm opening this issue on Dhruba's suggestion in HADOOP-4707.
> Plug-in implementations would extend this base class:
> {code}
> abstract class Plugin {
>   public abstract void datanodeStarted(DataNode datanode);
>   public abstract void datanodeStopping();
>   public abstract void namenodeStarted(NameNode namenode);
>   public abstract void namenodeStopping();
> }
> {code}
> Name node instances would then start the plug-ins according to a configuration object, and would also shut them down when the node goes down:
> {code}
> public class NameNode {
>   // [..]
>   private void initialize(Configuration conf) {
>     // [...]
>     for (Plugin p : PluginManager.loadPlugins(conf))
>       p.namenodeStarted(this);
>   }
>   // [..]
>   public void stop() {
>     if (stopRequested)
>       return;
>     stopRequested = true;
>     for (Plugin p : plugins)
>       p.namenodeStopping();
>     // [..]
>   }
>   // [..]
> }
> {code}
> Data nodes would do a similar thing in {{DataNode.startDatanode()}} and {{DataNode.shutdown}}.
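For reference, the {{PluginManager.loadPlugins}} helper mentioned in the description could be little more than a thin wrapper over {{Configuration.getInstances}}, tying this back to the classloader discussion above. A minimal sketch; the {{"dfs.plugins"}} key and this exact class layout are illustrative assumptions, not necessarily what the attached patches use:

{code}
// Sketch only: a minimal PluginManager built on Configuration.getInstances.
import java.util.List;

import org.apache.hadoop.conf.Configuration;

public class PluginManager {
  public static List<Plugin> loadPlugins(Configuration conf) {
    // getInstances splits the comma-separated class names under the given
    // key, loads each one through the Configuration's class loader and
    // instantiates it, failing with a RuntimeException if a name cannot
    // be resolved.
    return conf.getInstances("dfs.plugins", Plugin.class);
  }
}
{code}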