https://issues.apache.org/jira/browse/MAPREDUCE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308542#comment-16308542
Chris Trezzo commented on MAPREDUCE-6989:
-----------------------------------------
Hey [~miklos.szegedi@cloudera.com]! Thanks for the work so far! I have a question around the
high-level approach: Is there a reason why we can't leverage the shared cache for this? There
is already an upload mechanism that has been built, along with a cleaning mechanism and a
way to cache similar jars.
> [Umbrella] Uploader tool for Distributed Cache Deploy
> -----------------------------------------------------
>
> Key: MAPREDUCE-6989
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6989
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Reporter: Miklos Szegedi
> Assignee: Miklos Szegedi
> Attachments: MAPREDUCE-6989 Mapreduce framework uploader tool.pdf
>
>
> The proposal is to create a tool that collects all available jars on the Hadoop classpath
> and adds them to a single tarball, then uploads the resulting archive to an HDFS directory.
> This saves the cluster administrator from having to set this up manually for Distributed
> Cache Deploy.
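The quoted proposal boils down to three steps: enumerate the jars on the classpath, pack them into one archive, and push that archive to HDFS. A minimal sketch of that flow is below; the directory layout, tarball name, and HDFS target path are illustrative assumptions, not the actual MAPREDUCE-6989 implementation, and the upload step is commented out since it needs a live cluster.

```shell
#!/bin/sh
# Hypothetical sketch of the uploader flow described in the proposal.

# Stand-in for the jars on the Hadoop classpath; on a real node these
# would come from expanding `hadoop classpath` to files on disk.
CLASSPATH_DIR=$(mktemp -d)
touch "$CLASSPATH_DIR/hadoop-common.jar" "$CLASSPATH_DIR/hadoop-hdfs.jar"

# Illustrative archive name, not the tool's real output name.
TARBALL=mapreduce-framework.tar.gz

# Step 1+2: collect every jar into a single tarball.
tar -czf "$TARBALL" -C "$CLASSPATH_DIR" .

# Step 3: upload the archive to an HDFS directory for Distributed
# Cache Deploy (commented out; requires a running cluster):
# hdfs dfs -mkdir -p /apps/mapreduce
# hdfs dfs -put -f "$TARBALL" /apps/mapreduce/

# Show what ended up in the archive.
tar -tzf "$TARBALL"
```

An admin would then point `mapreduce.application.framework.path` at the uploaded archive, which is the manual setup the tool is meant to automate.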
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-help@hadoop.apache.org