ambari-dev mailing list archives

From "Alejandro Fernandez (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (AMBARI-7842) Ambari to manage tarballs on HDFS
Date Tue, 28 Oct 2014 21:54:34 GMT

     [ https://issues.apache.org/jira/browse/AMBARI-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Fernandez updated AMBARI-7842:
----------------------------------------
    Description: 
With HDP 2.2, Ambari needs to copy the tarballs/jars from the local file system to a certain location in HDFS.
The tarballs/jars no longer have a version number (either component version or HDP stack version + build) in the name, but the destination folder in HDFS does contain the HDP version (e.g., 2.2.0.0-999).

{code}
/hdp/apps/$(hdp-stack-version)
  |---- mapreduce/mapreduce.tar.gz
  |---- mapreduce/hadoop-streaming.jar (needed by WebHCat; on the local file system it is a symlink to a versioned file, so the copy to HDFS needs to follow the link)
  |---- tez/tez.tar.gz
  |---- pig/pig.tar.gz
  |---- hive/hive.tar.gz
  |---- sqoop/sqoop.tar.gz
{code}

Furthermore, the folders created in HDFS need to have a permission of 0555, while files need 0444.
The owner should be hdfs, and the group should be hadoop.
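The copy step described above could be scripted roughly as follows. This is a minimal illustrative sketch, not Ambari's actual implementation: the function name is hypothetical, and the use of the `hdfs dfs` CLI is an assumption. It resolves local symlinks first (as required for hadoop-streaming.jar) and applies the 0555/0444 permissions and hdfs:hadoop ownership from the description.

```python
import os
import posixpath

def hdfs_copy_commands(local_path, hdp_stack_version, component):
    """Build the shell commands to publish one tarball/jar to HDFS.
    Hypothetical helper, for illustration only -- not Ambari's real API.
    """
    # Follow local symlinks so the real (versioned) file is uploaded,
    # e.g. hadoop-streaming.jar -> hadoop-streaming-2.6.0.<build>.jar
    real_path = os.path.realpath(local_path)
    dest_dir = posixpath.join("/hdp/apps", hdp_stack_version, component)
    # The destination keeps the unversioned name; only the folder
    # carries the HDP version (e.g. 2.2.0.0-999).
    dest_file = posixpath.join(dest_dir, os.path.basename(local_path))
    return [
        "hdfs dfs -mkdir -p %s" % dest_dir,
        "hdfs dfs -put %s %s" % (real_path, dest_file),
        "hdfs dfs -chmod 0555 %s" % dest_dir,           # folders: 0555
        "hdfs dfs -chmod 0444 %s" % dest_file,          # files: 0444
        "hdfs dfs -chown hdfs:hadoop %s" % dest_file,   # owner hdfs, group hadoop
    ]
```

In practice the commands would need to run as the hdfs user (or via a kinit'ed principal on secure clusters), which is omitted here for brevity.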

  was:
With HDP 2.2, MapReduce needs versioned app tarballs on HDFS.

Tez has always had a tarball on HDFS that is not versioned. Oozie and WebHCat have also always published Pig and Hive tarballs on HDFS; with HDP 2.2 those also need to be versioned. Slider likewise has its own tarballs that need to be versioned and managed.
We need to consolidate this into a common versioned layout of tarballs on HDFS.

Here's an example proposal:
{code}
/hdp/apps/$(hdp-stack-version)
  |---- mapreduce/mapreduce-$(component-version)-$(hdp-stack-version).tar.gz
  |---- tez/tez-$(component-version)-$(hdp-stack-version).tar.gz
  |---- pig/pig-$(component-version)-$(hdp-stack-version).tar.gz
  |---- hive/hive-$(component-version)-$(hdp-stack-version).tar.gz
{code}
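Under that earlier proposal, every path embedded both versions. A hypothetical helper (names are illustrative, not from Ambari) would have composed them like this:

```python
import posixpath

def versioned_tarball_path(component, component_version, hdp_stack_version):
    """Build a fully versioned HDFS path per the earlier proposal:
    /hdp/apps/$(hdp-stack-version)/<component>/<component>-$(component-version)-$(hdp-stack-version).tar.gz
    Illustrative only -- this naming scheme was later dropped in favor of
    unversioned file names inside a versioned folder.
    """
    name = "%s-%s-%s.tar.gz" % (component, component_version, hdp_stack_version)
    return posixpath.join("/hdp/apps", hdp_stack_version, component, name)
```

The updated description above moves the version out of the file name entirely, so only the parent folder changes between stack versions.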


> Ambari to manage tarballs on HDFS
> ---------------------------------
>
>                 Key: AMBARI-7842
>                 URL: https://issues.apache.org/jira/browse/AMBARI-7842
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Alejandro Fernandez
>            Priority: Blocker
>         Attachments: ambari_170_versioned_rpms.pptx



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
