hadoop-mapreduce-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1901) Jobs should not submit the same jar files over and over again
Date Mon, 16 Aug 2010 16:47:25 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12898983#action_12898983 ]

dhruba borthakur commented on MAPREDUCE-1901:
---------------------------------------------

> use a model like AFS with callbacks to implement an on-disk cache

It means that a central authority stores the mapping of active callbacks to their
associated clients (and files). If a client dies prematurely, the central authority
should have the option to recover that callback and hand it over to a newly requesting
client. Are you proposing that the NN and/or JT be this central authority?
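
For concreteness, a minimal sketch of such a callback registry in Java (the class and
method names below are hypothetical, not existing Hadoop APIs):

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical registry the central authority (NN or JT) would hold:
    // it maps each cached file to the clients holding a callback on it.
    class CallbackRegistry {
        private final Map<String, Set<String>> callbacks = new ConcurrentHashMap<>();

        // Grant a callback: the client may trust its cached copy of the
        // file until the authority breaks the callback.
        void grant(String file, String clientId) {
            callbacks.computeIfAbsent(file, f -> ConcurrentHashMap.newKeySet())
                     .add(clientId);
        }

        // Reclaim the callbacks of a client declared dead (e.g. after
        // missed heartbeats) so they can be handed to new requesters.
        void recoverClient(String clientId) {
            for (Set<String> holders : callbacks.values()) {
                holders.remove(clientId);
            }
        }

        // Invalidate a file that changed. A real system would first send
        // invalidation RPCs to every holder.
        void breakCallback(String file) {
            callbacks.remove(file);
        }
    }

The hard part is everything this sketch leaves out: detecting dead clients, persisting
callback state across authority restarts, and the invalidation RPCs themselves.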

> Jobs should not submit the same jar files over and over again
> -------------------------------------------------------------
>
>                 Key: MAPREDUCE-1901
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1901
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>            Reporter: Joydeep Sen Sarma
>         Attachments: 1901.PATCH
>
>
> Currently each Hadoop job uploads its required resources (jars/files/archives) to a
> new location in HDFS. The map-reduce nodes involved in executing the job then download
> these resources to local disk.
> In an environment where most users share a standard set of jars and files (because
> they are using a framework like Hive/Pig), the same jars keep getting uploaded and
> downloaded repeatedly. The overhead of this protocol (primarily in terms of end-user
> latency) is significant when:
> - the jobs are small (and consequently large in number)
> - the Namenode is under load (meaning hdfs latencies are high, and made worse, in
> part, by this protocol)
> Hadoop should provide a way for jobs in a cooperative environment to avoid submitting
> the same files over and over again. Identifying and caching execution resources by a
> content signature (md5/sha) would be a good alternative to have available.
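
For illustration, a minimal sketch of content-signature-based caching in Java (the
/sharedcache path layout and the class name are invented for this example; SHA-256 is
assumed as the digest):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    class ResourceSignature {
        // Digest the jar's bytes so identical jars yield identical keys.
        static String sha256Hex(String localJar)
                throws IOException, NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = new FileInputStream(localJar)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    md.update(buf, 0, n);
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        }

        // Hypothetical shared-cache location: a job client would upload
        // the jar only if this path does not already exist in HDFS.
        static String cachePath(String localJar)
                throws IOException, NoSuchAlgorithmException {
            return "/sharedcache/" + sha256Hex(localJar) + ".jar";
        }
    }

Because the path is derived purely from content, concurrent jobs using the same jar
converge on one HDFS copy, and the upload/download happens once instead of per job.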

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

