ambari-dev mailing list archives

From "Hadoop QA (JIRA)" <>
Subject [jira] [Commented] (AMBARI-7842) Ambari to manage tarballs on HDFS
Date Wed, 29 Oct 2014 01:03:31 GMT


Hadoop QA commented on AMBARI-7842:

{color:green}+1 overall{color}.  Here are the results of testing the latest attachment against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 2 new or modified
test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of
javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number
of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in ambari-server.

Test results:
Console output:

This message is automatically generated.

> Ambari to manage tarballs on HDFS
> ---------------------------------
>                 Key: AMBARI-7842
>                 URL:
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Alejandro Fernandez
>            Assignee: Alejandro Fernandez
>            Priority: Blocker
>         Attachments: AMBARI-7842.patch, AMBARI-7842_branch-1.7.0.patch, ambari_170_versioned_rpms.pptx
> With HDP 2.2, Ambari needs to copy the tarballs/jars from the local file system to a
certain location in HDFS.
> The tarballs/jars no longer have a version number (either the component version or the HDP stack
version + build) in the name, but the destination folder in HDFS does contain the HDP version.
> {code}
> /hdp/apps/$(hdp-stack-version)
>   |---- mapreduce/mapreduce.tar.gz
>   |---- mapreduce/hadoop-streaming.jar (which is needed by WebHcat. In the file system,
it is a symlink to a versioned file, so HDFS needs to follow the link)
>   |---- tez/tez.tar.gz
>   |---- pig/pig.tar.gz
>   |---- hive/hive.tar.gz
>   |---- sqoop/sqoop.tar.gz
> {code}
> Furthermore, the folders created in HDFS need permissions of 0555, while files
need 0444.
> The owner should be hdfs, and the group should be hadoop.
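The copy-and-permission steps described in the issue can be sketched as a small helper that emits the `hdfs dfs` shell commands for each tarball. This is an illustrative sketch only, not Ambari's actual implementation: the function name, parameters, and component-to-path mapping are hypothetical. It also resolves each local source with `os.path.realpath` before the copy, since files like hadoop-streaming.jar are symlinks to versioned jars on the local file system and the upload must follow the link.

```python
import os

def hdfs_upload_commands(stack_version, sources):
    """Build the `hdfs dfs` commands that place tarballs/jars under
    /hdp/apps/<stack_version> with 0555 directories, 0444 files, and
    hdfs:hadoop ownership. `sources` maps component name -> local path.
    (Hypothetical helper; Ambari's real resource scripts differ.)
    """
    root = "/hdp/apps/%s" % stack_version
    cmds = ["hdfs dfs -mkdir -p %s" % root]
    for component, local_path in sorted(sources.items()):
        # Resolve symlinks locally: hadoop-streaming.jar, for example,
        # points at a versioned jar, and the real bytes are what must
        # land in HDFS under the unversioned name.
        real_path = os.path.realpath(local_path)
        dest_dir = "%s/%s" % (root, component)
        dest_file = "%s/%s" % (dest_dir, os.path.basename(local_path))
        cmds.append("hdfs dfs -mkdir -p %s" % dest_dir)
        cmds.append("hdfs dfs -put -f %s %s" % (real_path, dest_file))
        cmds.append("hdfs dfs -chmod 0444 %s" % dest_file)
        cmds.append("hdfs dfs -chmod 0555 %s" % dest_dir)
    cmds.append("hdfs dfs -chmod 0555 %s" % root)
    cmds.append("hdfs dfs -chown -R hdfs:hadoop %s" % root)
    return cmds
```

Separating command generation from execution keeps the ordering testable: files are chmod'd 0444 before their parent directory is locked down to 0555, and the recursive chown runs last so every newly created path ends up owned by hdfs:hadoop.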

This message was sent by Atlassian JIRA
