ambari-dev mailing list archives

From "Alejandro Fernandez (JIRA)" <>
Subject [jira] [Commented] (AMBARI-8323) WebHCat server needs Pig client to copy tarball to HDFS
Date Thu, 13 Nov 2014 23:39:34 GMT


Alejandro Fernandez commented on AMBARI-8323:

I verified that the patch works.

I created a cluster through the Install Wizard with the following topology:
||Host||DataNode||NameNode||Client||
|Host 1|Y|Y| |
|Host 2|Y| |Y|
|Host 3|Y| |Y|

It correctly installed the Pig client on Host 1, which is the WebHCat server.
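This kind of co-installation is driven by a stack-level component dependency. A sketch of how such a host-scoped dependency could be declared in the WebHCat service's metainfo.xml follows; the exact file contents and surrounding entries are assumptions for illustration, not quoted from the patch:

```xml
<!-- Hypothetical sketch: a host-scoped dependency on the Pig client,
     declared under the WEBHCAT_SERVER component definition, so that
     installing WEBHCAT_SERVER on a host also deploys PIG there. -->
<component>
  <name>WEBHCAT_SERVER</name>
  <category>MASTER</category>
  <dependencies>
    <dependency>
      <name>PIG/PIG</name>
      <scope>host</scope>
      <auto-deploy>
        <enabled>true</enabled>
      </auto-deploy>
    </dependency>
  </dependencies>
</component>
```

With `scope` set to `host`, the dependency is satisfied per host rather than per cluster, which is what both the Install Wizard and Blueprint deployments need here.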

shows Host 1.
[] shows Host 1, Host 2, and Host 3.

Further, the WebHCat Server Start log shows:
2014-11-13 23:04:31,354 - ExecuteHadoop['fs -ls hdfs:///hdp/apps/']
{'logoutput': True, 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hcat', 'conf_dir':
2014-11-13 23:04:31,355 - Execute['hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/']
{'logoutput': True, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hcat',
'try_sleep': 0}
2014-11-13 23:04:34,535 - ls: `hdfs:///hdp/apps/': No such file
or directory
2014-11-13 23:04:34,535 - HdfsDirectory['hdfs:///hdp/apps/'] {'security_enabled':
False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local':
'', 'mode': 0555, 'owner': 'hdfs', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action':
2014-11-13 23:04:34,537 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop
| grep -q "hadoop-1" || echo "-p"` hdfs:///hdp/apps/ && hadoop --config
/etc/hadoop/conf fs -chmod  555 hdfs:///hdp/apps/ && hadoop --config
/etc/hadoop/conf fs -chown  hdfs hdfs:///hdp/apps/'] {'not_if': "su - hdfs
-c 'export PATH=$PATH:/usr/hdp/current/hadoop-client/bin ; hadoop --config /etc/hadoop/conf
fs -ls hdfs:///hdp/apps/'", 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2014-11-13 23:04:47,306 - CopyFromLocal['/usr/hdp/current/pig-client/pig.tar.gz'] {'hadoop_bin_dir':
'/usr/hdp/current/hadoop-client/bin', 'group': 'hadoop', 'hdfs_user': 'hdfs', 'owner': 'hdfs',
'kinnit_if_needed': '', 'dest_dir': 'hdfs:///hdp/apps/', 'hadoop_conf_dir':
'/etc/hadoop/conf', 'mode': 0444}
2014-11-13 23:04:47,307 - ExecuteHadoop['fs -copyFromLocal /usr/hdp/current/pig-client/pig.tar.gz
hdfs:///hdp/apps/'] {'not_if': "su - hdfs -c ' export PATH=$PATH:/usr/hdp/current/hadoop-client/bin
; hadoop fs -ls hdfs:///hdp/apps/' >/dev/null 2>&1",
'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hdfs', 'conf_dir': '/etc/hadoop/conf'}
2014-11-13 23:04:50,535 - Execute['hadoop --config /etc/hadoop/conf fs -copyFromLocal /usr/hdp/current/pig-client/pig.tar.gz
hdfs:///hdp/apps/'] {'logoutput': False, 'path': ['/usr/hdp/current/hadoop-client/bin'],
'tries': 1, 'user': 'hdfs', 'try_sleep': 0}
2014-11-13 23:04:56,250 - ExecuteHadoop['fs -chown hdfs:hadoop hdfs:///hdp/apps/']
{'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hdfs', 'conf_dir': '/etc/hadoop/conf'}
2014-11-13 23:04:56,251 - Execute['hadoop --config /etc/hadoop/conf fs -chown hdfs:hadoop
hdfs:///hdp/apps/'] {'logoutput': False, 'path': ['/usr/hdp/current/hadoop-client/bin'],
'tries': 1, 'user': 'hdfs', 'try_sleep': 0}
2014-11-13 23:04:59,509 - ExecuteHadoop['fs -chmod 444 hdfs:///hdp/apps/']
{'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hdfs', 'conf_dir': '/etc/hadoop/conf'}
2014-11-13 23:04:59,510 - Execute['hadoop --config /etc/hadoop/conf fs -chmod 444 hdfs:///hdp/apps/']
{'logoutput': False, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hdfs',
'try_sleep': 0}
which I confirmed again by running:
[root@c6401 ~]# hdfs dfs -ls hdfs:///hdp/apps/
-r--r--r--   3 hdfs hadoop   97145582 2014-11-13 23:04 hdfs:///hdp/apps/
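The mode and owner in that listing can also be checked mechanically. Below is a small sketch that parses a captured `hdfs dfs -ls` line; the listing text is copied from above, and access to a running `hdfs` client is deliberately not assumed:

```shell
# Parse a captured `hdfs dfs -ls` line and assert the expected
# permissions (mode 444 -> -r--r--r--) and owner/group (hdfs/hadoop).
line='-r--r--r--   3 hdfs hadoop   97145582 2014-11-13 23:04 hdfs:///hdp/apps/'
perms=$(echo "$line" | awk '{print $1}')
owner=$(echo "$line" | awk '{print $3}')
group=$(echo "$line" | awk '{print $4}')
if [ "$perms" = "-r--r--r--" ] && [ "$owner" = "hdfs" ] && [ "$group" = "hadoop" ]; then
  echo "tarball permissions OK"
else
  echo "unexpected permissions: $perms $owner:$group"
fi
```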


I then tried to create the cluster using Blueprints (see attachments, which are based on
the first cluster).
*3-node-cluster-blueprint.json* contains the original blueprint, in which the Pig client is
already in host_group_1 alongside the WebHCat server.
*3-node-cluster-blueprint-no-pig-client.json* omits the Pig client from host_group_1;
creating the blueprint through the API should still add it to the group.

curl -X POST -u admin:admin -H 'X-Requested-By:1'
-d @3-node-cluster-blueprint.json
curl -X POST -u admin:admin -H 'X-Requested-By:1'
-d @3-node-cluster-blueprint-no-pig-client.json
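The endpoints were omitted above. A dry-run sketch of what the full registration calls could look like follows; the host name is a hypothetical placeholder, the port assumes Ambari's default of 8080, and the blueprint name in the URL is whatever the caller chooses:

```shell
# Dry-run sketch: build and print the registration commands rather than
# executing them, since AMBARI_HOST is a hypothetical placeholder.
AMBARI_HOST="c6401.ambari.apache.org"   # assumption, not from the source
BASE="http://${AMBARI_HOST}:8080/api/v1"
for f in 3-node-cluster-blueprint.json 3-node-cluster-blueprint-no-pig-client.json; do
  name="${f%.json}"   # use the file name (minus .json) as the blueprint name
  echo "curl -X POST -u admin:admin -H 'X-Requested-By:1' -d @${f} ${BASE}/blueprints/${name}"
done
```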

I then verified that
did contain PIG in host_group_1

Next, I applied the topology for the blueprint that did not originally contain PIG:
curl -X POST -u admin:admin -H 'X-Requested-By:1'
-d @3-node-topology.json

Finally, I verified that the PIG client was correctly installed on the WebHCat server host.

> WebHCat server needs Pig client to copy tarball to HDFS
> -------------------------------------------------------
>                 Key: AMBARI-8323
>                 URL:
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 1.7.0
>            Reporter: Alejandro Fernandez
>            Assignee: Alejandro Fernandez
>            Priority: Blocker
>             Fix For: 1.7.0
>         Attachments: AMBARI-8323.patch
> The WebHCat server tries to copy the pig.tar.gz file to HDFS if the file exists in /usr/hdp/current/pig-client/pig.tar.gz.
> The file will only exist there if the host also has the Pig client installed.
> For this reason, installing the WebHCat server must also install the Pig client, and
> this should work for both the Install Wizard and Blueprints.

This message was sent by Atlassian JIRA
