ambari-user mailing list archives

From Sumit Mohanty <smoha...@hortonworks.com>
Subject Re: When does tar copying happen ?
Date Wed, 30 Dec 2015 06:32:49 GMT
Hive Server Start copies the sqoop tarball, but only if the tarball exists on the same host where Hive
Server is deployed.


I see some code in spark_service.py that copies the tez tarball. You can use that as a reference.
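For illustration, here is a minimal sketch of such a copy in a Spark service script, assuming the
copy_tarballs_to_hdfs() helper quoted from Ambaripreupload.py below is importable from
resource_management. The import path, spark tarball location, component name, and params fields are
assumptions, not taken from the real spark_service.py:

  # Hypothetical sketch, not the actual spark_service.py code. The call
  # signature mirrors the copy_tarballs_to_hdfs() calls quoted below:
  # (local source, HDFS destination dir, component, owner, hdfs user, group).
  from resource_management.libraries.functions import format
  from resource_management.libraries.functions.dynamic_variable_interpretation import copy_tarballs_to_hdfs
  import params  # the service's params module, per Ambari script convention

  def copy_spark_tarball_to_hdfs():
      # Assumed local location of the spark tarball on the Spark host.
      copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/spark/lib/spark.tar.gz"),
                            "/hdp/apps/{{ hdp_stack_version }}/spark/",
                            'spark-historyserver',    # assumed component name
                            params.spark_user,        # owner of the uploaded file
                            params.hdfs_user,         # user performing the HDFS write
                            params.user_group)        # group for the uploaded file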

________________________________
From: Jeff Zhang <zjffdu@gmail.com>
Sent: Tuesday, December 29, 2015 9:51 PM
To: user@ambari.apache.org
Cc: dev@ambari.apache.org
Subject: Re: When does tar copying happen ?

>>> I believe sqoop tarball is uploaded as part of HIVE installation.
I don't think so; I installed Hive, but no sqoop tarball was found.
Actually, I'd like to upload the spark jar the same way the other tarballs are uploaded when
installing Spark. Could you guide me on how to do that?



On Wed, Dec 30, 2015 at 1:46 PM, Sumit Mohanty <smohanty@hortonworks.com> wrote:

Ambaripreupload.py is not used during Ambari-based cluster installations.


I believe sqoop tarball is uploaded as part of HIVE installation.


-Sumit

________________________________
From: Jeff Zhang <zjffdu@gmail.com>
Sent: Tuesday, December 29, 2015 9:42 PM
To: user@ambari.apache.org; dev@ambari.apache.org
Subject: When does tar copying happen ?

I installed sqoop separately, but found that no sqoop tarball was uploaded to HDFS.
I found the uploading script in Ambaripreupload.py, and I'm wondering when this script is called.
Is it called only during the first HDP installation? If so, some tarballs may be missing when I
install components separately.



print "Copying tarballs..."

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hadoop/mapreduce.tar.gz"), hdfs_path_prefix+"/hdp/apps/{{
hdp_stack_version }}/mapreduce/", 'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user,
params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/tez/lib/tez.tar.gz"), hdfs_path_prefix+"/hdp/apps/{{
hdp_stack_version }}/tez/", 'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user,
params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hive/hive.tar.gz"), hdfs_path_prefix+"/hdp/apps/{{
hdp_stack_version }}/hive/", 'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user,
params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/pig/pig.tar.gz"), hdfs_path_prefix+"/hdp/apps/{{
hdp_stack_version }}/pig/", 'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user,
params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hadoop-mapreduce/hadoop-streaming.jar"),
hdfs_path_prefix+"/hdp/apps/{{ hdp_stack_version }}/mapreduce/", 'hadoop-mapreduce-historyserver',
params.mapred_user, params.hdfs_user, params.user_group)

  copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/sqoop/sqoop.tar.gz"), hdfs_path_prefix+"/hdp/apps/{{
hdp_stack_version }}/sqoop/", 'hadoop-mapreduce-historyserver', params.mapred_user, params.hdfs_user,
params.user_group)



--
Best Regards

Jeff Zhang



--
Best Regards

Jeff Zhang
