Subject: [jira] [Comment Edited] (HADOOP-11485) Pluggable shell integration
From: "Allen Wittenauer (JIRA)"
To: common-issues@hadoop.apache.org
Date: Thu, 15 Jan 2015 17:06:34 +0000 (UTC)

    [ https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278931#comment-14278931 ]

Allen Wittenauer edited comment on HADOOP-11485 at 1/15/15 5:05 PM:
--------------------------------------------------------------------

I'm specifically thinking of doing something like this:

* a directory off of HADOOP_CONF_DIR or HADOOP_LIBEXEC_DIR that contains shell fragments (think /etc/profile.d)
* an initializer that uses hadoop_add_colonpath to add a prefix to a var (HADOOP_SHELLFRAG or something)
* each shell frag could define the following functions:

{code}
_(frag)_hadoop_classpath
_(frag)_hadoop_init
_(frag)_hadoop_finalizer
{code}

i.e., the current hadoop_add_to_classpath_yarn would get moved out of hadoop-functions.sh into this fragment file and renamed to _yarn_hadoop_classpath.

A few other notes:

* HADOOP_CONF_DIR would need to get moved from first in to last in + prepend. It must *always* be first in the classpath; we don't want 3rd parties coming in front of it.
* We could provide no guarantees, really, as to when a jar appears in the classpath using this method, so this wouldn't be a way to override classes.
* Currently, the only way to manage $\@ via a fragment is going to be on source'ing it, so sourcing will happen after we do our normal shell param processing. That means hadoop-common options will need to come first, 3rd-party options after, followed by the appropriate shell subcommand, e.g., yarn --conf foo --hbaseconf bar jar myhbase.jar
* We'll likely need to banish all of the 'extraneous' shell env vars introduced in 2.x and effectively deprecated as part of HADOOP-9902.
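To make the naming convention concrete, a single fragment file might look something like the sketch below. The fragment name ("hbase"), the shellprofile.d location, and the default HBASE_HOME are illustrative only; a stand-in for hadoop_add_classpath (which the real hadoop-functions.sh provides) is included so the sketch runs on its own.

```shell
#!/usr/bin/env bash
# Hypothetical fragment file, e.g. ${HADOOP_CONF_DIR}/shellprofile.d/hbase.sh,
# shipped by a 3rd party such as HBase.

# Stand-in for the real hadoop_add_classpath from hadoop-functions.sh,
# so this sketch is self-contained.
if ! declare -F hadoop_add_classpath >/dev/null; then
  hadoop_add_classpath() {
    CLASSPATH="${CLASSPATH:+${CLASSPATH}:}$1"
  }
fi

# _(frag)_hadoop_classpath: add this component's jars. Per the notes
# above, there is no ordering guarantee relative to other fragments.
_hbase_hadoop_classpath() {
  hadoop_add_classpath "${HBASE_HOME:-/opt/hbase}/lib/*"
}

# _(frag)_hadoop_init: runs when the fragment is sourced, i.e. after
# the normal hadoop-common shell param processing.
_hbase_hadoop_init() {
  :
}

# _(frag)_hadoop_finalizer: last chance to adjust the environment
# before the JVM is launched.
_hbase_hadoop_finalizer() {
  :
}
```

The function bodies here are placeholders; the point is only the _(frag)_ prefix that lets the driver discover them by name.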
> Pluggable shell integration
> ---------------------------
>
>                 Key: HADOOP-11485
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11485
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Allen Wittenauer
>
> It would be useful to provide a way for core and non-core Hadoop components to plug into the shell infrastructure. This would allow us to pull the HDFS, MapReduce, and YARN shell functions out of hadoop-functions.sh. Additionally, it should let 3rd parties such as HBase influence things like classpaths at runtime.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
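For completeness, the dispatch side of the proposal could be sketched roughly as follows. The function names (hadoop_import_shellprofiles, hadoop_shellprofiles_classpath), the shellprofile.d directory, and HADOOP_SHELLFRAG are all hypothetical, and hadoop_add_colonpath is simplified from whatever its hadoop-functions.sh form would be.

```shell
#!/usr/bin/env bash
# Sketch of the driver that discovers fragments and dispatches to their
# _(frag)_ functions; names are illustrative, not the actual implementation.

hadoop_add_colonpath() {
  # append $2 to the colon-separated list held in the variable named $1
  local var=$1 val=$2
  eval "${var}=\"\${${var}:+\${${var}}:}${val}\""
}

hadoop_import_shellprofiles() {
  # source every fragment and remember its name for later dispatch
  local frag
  for frag in "${HADOOP_CONF_DIR:-/etc/hadoop}/shellprofile.d"/*.sh; do
    [ -r "${frag}" ] || continue
    . "${frag}"
    hadoop_add_colonpath HADOOP_SHELLFRAG "$(basename "${frag}" .sh)"
  done
}

hadoop_shellprofiles_classpath() {
  # call _(frag)_hadoop_classpath for every registered fragment;
  # _(frag)_hadoop_init and _(frag)_hadoop_finalizer would be driven
  # the same way at their points in the launch sequence.
  local name
  local IFS=:
  for name in ${HADOOP_SHELLFRAG}; do
    if declare -F "_${name}_hadoop_classpath" >/dev/null; then
      "_${name}_hadoop_classpath"
    fi
  done
}
```

Because fragments are only sourced after the normal param processing, this sketch is consistent with the ordering constraint in the comment: hadoop-common options first, 3rd-party options after, then the subcommand, as in the yarn example above.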