Return-Path: 
X-Original-To: apmail-ambari-dev-archive@www.apache.org
Delivered-To: apmail-ambari-dev-archive@www.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3])
    by minotaur.apache.org (Postfix) with SMTP id DFAB017B3A
    for ; Tue, 28 Oct 2014 21:54:34 +0000 (UTC)
Received: (qmail 48825 invoked by uid 500); 28 Oct 2014 21:54:34 -0000
Delivered-To: apmail-ambari-dev-archive@ambari.apache.org
Received: (qmail 48798 invoked by uid 500); 28 Oct 2014 21:54:34 -0000
Mailing-List: contact dev-help@ambari.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: dev@ambari.apache.org
Delivered-To: mailing list dev@ambari.apache.org
Received: (qmail 48783 invoked by uid 99); 28 Oct 2014 21:54:34 -0000
Received: from arcas.apache.org (HELO arcas.apache.org) (140.211.11.28)
    by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 28 Oct 2014 21:54:34 +0000
Date: Tue, 28 Oct 2014 21:54:34 +0000 (UTC)
From: "Alejandro Fernandez (JIRA)" 
To: dev@ambari.apache.org
Message-ID: 
In-Reply-To: 
References: 
Subject: [jira] [Updated] (AMBARI-7842) Ambari to manage tarballs on HDFS
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394

     [ https://issues.apache.org/jira/browse/AMBARI-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Fernandez updated AMBARI-7842:
----------------------------------------
    Description: 
With HDP 2.2, Ambari needs to copy the tarballs/jars from the local file system to a certain location in HDFS.

The tarballs/jars no longer have a version number (either the component version or the HDP stack version + build) in the name, but the destination folder in HDFS does contain the HDP version (e.g., 2.2.0.0-999).

{code}
/hdp/apps/$(hdp-stack-version)
|---- mapreduce/mapreduce.tar.gz
|---- mapreduce/hadoop-streaming.jar (needed by WebHCat; in the local file system it is a symlink to a versioned file, so HDFS needs to follow the link)
|---- tez/tez.tar.gz
|---- pig/pig.tar.gz
|---- hive/hive.tar.gz
|---- sqoop/sqoop.tar.gz
{code}

Furthermore, the folders created in HDFS need to have a permission of 0555, while files need 0444. The owner should be hdfs, and the group should be hadoop.

  was:
With HDP 2.2, MapReduce needs versioned app tarballs on HDFS. Tez has always had an unversioned tarball on HDFS. Oozie and WebHCat have also always published Pig and Hive tarballs on HDFS; with HDP 2.2 these also need to be versioned. Slider also has its own tarballs that need to be versioned and managed. We need to consolidate this into a common versioned layout of tarballs on HDFS. Here is the example proposal:

{code}
/hdp/apps/$(hdp-stack-version)
|---- mapreduce/mapreduce-$(component-version)-$(hdp-stack-version).tar.gz
|---- tez/tez-$(component-version)-$(hdp-stack-version).tar.gz
|---- pig/pig-$(component-version)-$(hdp-stack-version).tar.gz
|---- hive/hive-$(component-version)-$(hdp-stack-version).tar.gz
{code}


> Ambari to manage tarballs on HDFS
> ---------------------------------
>
>                 Key: AMBARI-7842
>                 URL: https://issues.apache.org/jira/browse/AMBARI-7842
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Alejandro Fernandez
>            Priority: Blocker
>         Attachments: ambari_170_versioned_rpms.pptx
>
>
> With HDP 2.2, Ambari needs to copy the tarballs/jars from the local file system to a certain location in HDFS.
> The tarballs/jars no longer have a version number (either the component version or the HDP stack version + build) in the name, but the destination folder in HDFS does contain the HDP version (e.g., 2.2.0.0-999).
> {code}
> /hdp/apps/$(hdp-stack-version)
> |---- mapreduce/mapreduce.tar.gz
> |---- mapreduce/hadoop-streaming.jar (needed by WebHCat; in the local file system it is a symlink to a versioned file, so HDFS needs to follow the link)
> |---- tez/tez.tar.gz
> |---- pig/pig.tar.gz
> |---- hive/hive.tar.gz
> |---- sqoop/sqoop.tar.gz
> {code}
> Furthermore, the folders created in HDFS need to have a permission of 0555, while files need 0444.
> The owner should be hdfs, and the group should be hadoop.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
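The layout and permission rules in the description can be sketched in Python. This is a hypothetical helper, not Ambari's actual implementation; the function and constant names are assumptions. It builds the mapping from each component to its unversioned file names under the versioned `/hdp/apps/$(hdp-stack-version)` directory, and records the 0555/0444 modes and hdfs:hadoop ownership stated in the issue.

```python
import os

# Permissions and ownership required by the issue description.
DIR_MODE = 0o555    # folders created in HDFS
FILE_MODE = 0o444   # files copied to HDFS
OWNER, GROUP = "hdfs", "hadoop"

# Component -> unversioned file names under /hdp/apps/$(hdp-stack-version)/<component>/
COMPONENT_FILES = {
    "mapreduce": ["mapreduce.tar.gz", "hadoop-streaming.jar"],
    "tez": ["tez.tar.gz"],
    "pig": ["pig.tar.gz"],
    "hive": ["hive.tar.gz"],
    "sqoop": ["sqoop.tar.gz"],
}


def hdfs_destinations(stack_version):
    """Return (hdfs_dir, hdfs_file_path) pairs for one stack version.

    Only the destination directory carries the HDP version
    (e.g. 2.2.0.0-999); the file names themselves are unversioned.
    """
    base = "/hdp/apps/%s" % stack_version
    dests = []
    for component, files in sorted(COMPONENT_FILES.items()):
        comp_dir = "%s/%s" % (base, component)
        for name in files:
            dests.append((comp_dir, "%s/%s" % (comp_dir, name)))
    return dests


def resolve_local_source(local_path):
    """hadoop-streaming.jar is a symlink on the local file system; follow
    it so the real (versioned) file is what gets uploaded to HDFS."""
    return os.path.realpath(local_path)
```

Each resulting pair would then be applied with the standard HDFS shell, roughly `hadoop fs -mkdir -p <dir>`, `hadoop fs -put <local> <path>`, `hadoop fs -chmod 0444 <path>` (0555 on the directory), and `hadoop fs -chown hdfs:hadoop <path>`; the exact invocation in Ambari may differ.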