ambari-dev mailing list archives

From Jeff Zhang <zjf...@gmail.com>
Subject Re: When does tar copying happen?
Date Wed, 30 Dec 2015 05:51:55 GMT
>>> I believe sqoop tarball is uploaded as part of HIVE installation.
I don't think so, because I installed Hive but found no sqoop tarball.
Actually, I'd like to upload the Spark jar the way the other tarballs are
uploaded when installing Spark. Could you guide me on how to do that?
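
For reference, what I have in mind is something like the sketch below,
modeled on the copy_tarballs_to_hdfs calls quoted further down. The Spark
jar path and the HDFS destination directory here are my guesses, not
confirmed paths; the actual Spark artifact name varies by HDP version:

    # Hypothetical sketch, following the pattern of the other
    # copy_tarballs_to_hdfs calls in Ambaripreupload.py.
    # The source jar path and the HDFS destination are assumptions;
    # check the actual Spark layout for your stack version first.
    copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/spark/lib/spark-assembly.jar"),
                          hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/spark/",
                          'hadoop-mapreduce-historyserver', params.mapred_user,
                          params.hdfs_user, params.user_group)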



On Wed, Dec 30, 2015 at 1:46 PM, Sumit Mohanty <smohanty@hortonworks.com>
wrote:

> Ambaripreupload.py is not used during Ambari-based cluster installations.
>
>
> I believe sqoop tarball is uploaded as part of HIVE installation.
>
>
> -Sumit
> ------------------------------
> *From:* Jeff Zhang <zjffdu@gmail.com>
> *Sent:* Tuesday, December 29, 2015 9:42 PM
> *To:* user@ambari.apache.org; dev@ambari.apache.org
> *Subject:* When does tar copying happen?
>
> I installed sqoop separately, but found that no sqoop tarball was uploaded
> to HDFS.
> I found the uploading script in Ambaripreupload.py, and I am wondering when
> this script is called. Is it called only during the first HDP installation?
> Then some tarballs may be missing if I install services separately.
>
>
> print "Copying tarballs..."
>
> copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hadoop/mapreduce.tar.gz"),
>                       hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/mapreduce/",
>                       'hadoop-mapreduce-historyserver', params.mapred_user,
>                       params.hdfs_user, params.user_group)
>
> copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/tez/lib/tez.tar.gz"),
>                       hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/tez/",
>                       'hadoop-mapreduce-historyserver', params.mapred_user,
>                       params.hdfs_user, params.user_group)
>
> copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hive/hive.tar.gz"),
>                       hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/hive/",
>                       'hadoop-mapreduce-historyserver', params.mapred_user,
>                       params.hdfs_user, params.user_group)
>
> copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/pig/pig.tar.gz"),
>                       hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/pig/",
>                       'hadoop-mapreduce-historyserver', params.mapred_user,
>                       params.hdfs_user, params.user_group)
>
> copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/hadoop-mapreduce/hadoop-streaming.jar"),
>                       hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/mapreduce/",
>                       'hadoop-mapreduce-historyserver', params.mapred_user,
>                       params.hdfs_user, params.user_group)
>
> copy_tarballs_to_hdfs(format("/usr/hdp/{hdp_version}/sqoop/sqoop.tar.gz"),
>                       hdfs_path_prefix + "/hdp/apps/{{ hdp_stack_version }}/sqoop/",
>                       'hadoop-mapreduce-historyserver', params.mapred_user,
>                       params.hdfs_user, params.user_group)
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>



-- 
Best Regards

Jeff Zhang
