ambari-dev mailing list archives

From "Alejandro Fernandez (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-8323) WebHCat server needs Pig client to copy tarball to HDFS
Date Thu, 13 Nov 2014 23:39:34 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14211485#comment-14211485
] 

Alejandro Fernandez commented on AMBARI-8323:
---------------------------------------------

I verified that the patch works.

I created a cluster through the Install Wizard with the following topology:
||Host||DataNode||NodeManager||Client||
|Host 1|Y|Y| |
|Host 2|Y| |Y|
|Host 3|Y| |Y|

And it correctly installed the Pig client on Host 1, which is the WebHCat server.

[http://c6401.ambari.apache.org:8080/api/v1/clusters/dev/services/HIVE/components/WEBHCAT_SERVER] shows Host 1.
[http://c6401.ambari.apache.org:8080/api/v1/clusters/dev/services/PIG/components/PIG] shows Host 1, Host 2, and Host 3.

Further, the WebHCat Server Start log shows:
{code}
2014-11-13 23:04:31,354 - ExecuteHadoop['fs -ls hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz']
{'logoutput': True, 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hcat', 'conf_dir':
'/etc/hadoop/conf'}
2014-11-13 23:04:31,355 - Execute['hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz']
{'logoutput': True, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hcat',
'try_sleep': 0}
2014-11-13 23:04:34,535 - ls: `hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz': No such file
or directory
2014-11-13 23:04:34,535 - HdfsDirectory['hdfs:///hdp/apps/2.2.0.0-1971/pig'] {'security_enabled':
False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local':
'', 'mode': 0555, 'owner': 'hdfs', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action':
['create']}
2014-11-13 23:04:34,537 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop
| grep -q "hadoop-1" || echo "-p"` hdfs:///hdp/apps/2.2.0.0-1971/pig && hadoop --config
/etc/hadoop/conf fs -chmod  555 hdfs:///hdp/apps/2.2.0.0-1971/pig && hadoop --config
/etc/hadoop/conf fs -chown  hdfs hdfs:///hdp/apps/2.2.0.0-1971/pig'] {'not_if': "su - hdfs
-c 'export PATH=$PATH:/usr/hdp/current/hadoop-client/bin ; hadoop --config /etc/hadoop/conf
fs -ls hdfs:///hdp/apps/2.2.0.0-1971/pig'", 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2014-11-13 23:04:47,306 - CopyFromLocal['/usr/hdp/current/pig-client/pig.tar.gz'] {'hadoop_bin_dir':
'/usr/hdp/current/hadoop-client/bin', 'group': 'hadoop', 'hdfs_user': 'hdfs', 'owner': 'hdfs',
'kinnit_if_needed': '', 'dest_dir': 'hdfs:///hdp/apps/2.2.0.0-1971/pig', 'hadoop_conf_dir':
'/etc/hadoop/conf', 'mode': 0444}
2014-11-13 23:04:47,307 - ExecuteHadoop['fs -copyFromLocal /usr/hdp/current/pig-client/pig.tar.gz
hdfs:///hdp/apps/2.2.0.0-1971/pig'] {'not_if': "su - hdfs -c ' export PATH=$PATH:/usr/hdp/current/hadoop-client/bin
; hadoop fs -ls hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz' >/dev/null 2>&1",
'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hdfs', 'conf_dir': '/etc/hadoop/conf'}
2014-11-13 23:04:50,535 - Execute['hadoop --config /etc/hadoop/conf fs -copyFromLocal /usr/hdp/current/pig-client/pig.tar.gz
hdfs:///hdp/apps/2.2.0.0-1971/pig'] {'logoutput': False, 'path': ['/usr/hdp/current/hadoop-client/bin'],
'tries': 1, 'user': 'hdfs', 'try_sleep': 0}
2014-11-13 23:04:56,250 - ExecuteHadoop['fs -chown hdfs:hadoop hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz']
{'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hdfs', 'conf_dir': '/etc/hadoop/conf'}
2014-11-13 23:04:56,251 - Execute['hadoop --config /etc/hadoop/conf fs -chown hdfs:hadoop
hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz'] {'logoutput': False, 'path': ['/usr/hdp/current/hadoop-client/bin'],
'tries': 1, 'user': 'hdfs', 'try_sleep': 0}
2014-11-13 23:04:59,509 - ExecuteHadoop['fs -chmod 444 hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz']
{'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hdfs', 'conf_dir': '/etc/hadoop/conf'}
2014-11-13 23:04:59,510 - Execute['hadoop --config /etc/hadoop/conf fs -chmod 444 hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz']
{'logoutput': False, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hdfs',
'try_sleep': 0}
{code}
I confirmed this again by running:
{code}
[root@c6401 ~]# hdfs dfs -ls hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz
-r--r--r--   3 hdfs hadoop   97145582 2014-11-13 23:04 hdfs:///hdp/apps/2.2.0.0-1971/pig/pig.tar.gz
{code}
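The guarded copy in the log above (an {{ExecuteHadoop}} with a {{not_if}} check) boils down to a check-then-copy pattern: list the destination first, and copy only if the listing fails. A minimal sketch in Python, assuming the {{hadoop}} CLI is on the PATH (illustrative only, not Ambari's actual resource_management code):

{code}
import subprocess

def dest_path(local_path, dest_dir):
    # e.g. hdfs:///hdp/apps/<version>/pig + pig.tar.gz -> full destination path
    return dest_dir.rstrip('/') + '/' + local_path.rsplit('/', 1)[-1]

def copy_tarball_if_missing(local_path, dest_dir, conf_dir='/etc/hadoop/conf'):
    dest = dest_path(local_path, dest_dir)
    # 'fs -ls' exits non-zero when the file is absent -- the same check
    # the not_if guard in the log performs
    ls = subprocess.run(['hadoop', '--config', conf_dir, 'fs', '-ls', dest],
                        capture_output=True)
    if ls.returncode != 0:
        subprocess.run(['hadoop', '--config', conf_dir, 'fs',
                        '-copyFromLocal', local_path, dest_dir], check=True)
{code}

This keeps the copy idempotent: re-running WebHCat Server Start after the tarball is already in HDFS does nothing.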

----

I then tried to create the cluster using Blueprints (see attachments, which are based on the first cluster).
*3-node-cluster-blueprint.json* contains the original blueprint, in which the PIG client is already in host_group_1 alongside the WebHCat server.
*3-node-cluster-blueprint-no-pig-client.json* omits the PIG client from host_group_1, but registering the blueprint through the API should still add it to the group.

{code}
curl -X POST -u admin:admin -H 'X-Requested-By:1' http://c6404.ambari.apache.org:8080/api/v1/blueprints/original -d @3-node-cluster-blueprint.json
curl -X POST -u admin:admin -H 'X-Requested-By:1' http://c6404.ambari.apache.org:8080/api/v1/blueprints/original-no-pig -d @3-node-cluster-blueprint-no-pig-client.json
{code}

I then verified that http://c6404.ambari.apache.org:8080/api/v1/blueprints/original-no-pig did contain PIG in host_group_1.
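The behavior being verified here, adding the PIG client to any host group that declares WEBHCAT_SERVER, can be sketched as a blueprint post-processing step (a hypothetical helper for illustration, not the actual server-side code):

{code}
def ensure_pig_with_webhcat(blueprint):
    # For every host group that declares WEBHCAT_SERVER, add the PIG
    # client if it is missing -- mirroring what registering
    # 'original-no-pig' did to host_group_1 above.
    for group in blueprint.get('host_groups', []):
        names = {c['name'] for c in group['components']}
        if 'WEBHCAT_SERVER' in names and 'PIG' not in names:
            group['components'].append({'name': 'PIG'})
    return blueprint
{code}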

Next, I applied the topology for the blueprint that did not originally contain PIG:
{code}
curl -X POST -u admin:admin -H 'X-Requested-By:1' http://c6404.ambari.apache.org:8080/api/v1/clusters/dev -d @3-node-topology.json
{code}

Finally, I verified that the PIG client was correctly installed on the WebHCat server host:
[http://c6404.ambari.apache.org:8080/api/v1/clusters/dev/services/HIVE/components/WEBHCAT_SERVER] shows c6404.ambari.apache.org.
[http://c6404.ambari.apache.org:8080/api/v1/clusters/dev/services/PIG/components/PIG] shows c6404.ambari.apache.org, c6405.ambari.apache.org, and c6406.ambari.apache.org.
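The host-component checks above can also be scripted instead of eyeballed in a browser. A small sketch against the Ambari REST API, assuming the same admin:admin credentials and the {{fields}} query parameter (both hypothetical defaults for illustration):

{code}
import base64
import json
import urllib.request

def api_url(base, cluster, service, component):
    # REST path for a service component's host assignments
    return (f'{base}/api/v1/clusters/{cluster}/services/{service}'
            f'/components/{component}?fields=host_components/HostRoles/host_name')

def component_hosts(base, cluster, service, component, auth=('admin', 'admin')):
    req = urllib.request.Request(api_url(base, cluster, service, component))
    token = base64.b64encode(f'{auth[0]}:{auth[1]}'.encode()).decode()
    req.add_header('Authorization', f'Basic {token}')
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return sorted(hc['HostRoles']['host_name']
                  for hc in data.get('host_components', []))
{code}

With this, the verification is a one-liner: the WEBHCAT_SERVER host list must be a subset of the PIG host list.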

> WebHCat server needs Pig client to copy tarball to HDFS
> -------------------------------------------------------
>
>                 Key: AMBARI-8323
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8323
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 1.7.0
>            Reporter: Alejandro Fernandez
>            Assignee: Alejandro Fernandez
>            Priority: Blocker
>             Fix For: 1.7.0
>
>         Attachments: AMBARI-8323.patch
>
>
> The WebHCat server tries to copy the pig.tar.gz file to HDFS if the file exists in /usr/hdp/current/pig-client/pig.tar.gz.
> The file will only exist there if the host also has the Pig client installed.
> For this reason, installing the WebHCat server must also install the Pig client, and
> this should work for both the Install Wizard and Blueprints.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
