hadoop-common-issues mailing list archives

From "Joep Rottinghuis (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)
Date Wed, 12 Jun 2013 08:18:21 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13681033#comment-13681033

Joep Rottinghuis commented on HADOOP-9639:

This is an important issue for us at Twitter.
Our clusters run tens of thousands of jobs a day, and this jar copying becomes a significant overhead.
Even though the default block size is set to 512 MB, we end up with average block sizes of
95-150 MB (depending on cluster; it is also not necessarily true that each job reads the average
block size, but still...).
It is also common practice for our users to pack all their dependencies (and then some) in
one large job jar that is submitted to the cluster. These jars range from small (6-15 MB)
in some cases to fairly large 80-130 MB in most cases.
Having a job jar in the same order of magnitude as the block size is enormously wasteful.
When a cluster is hundreds to thousands of nodes, the overhead becomes apparent at two ends
of the scale:
a) Tiny jobs of just a few mappers running very often. For example, for a 1 mapper job, the
jobfile gets copied to HDFS (3 nodes) and then to the node actually running the task (probably
a different node). That is 4 copies of the jar.

b) Medium jobs whose number of mappers plus reducers approaches the number of nodes in the
cluster. Then the odds are that each node ends up running at least one map task, which means
the job jar is copied to every single node (into the thousands). Only when a node runs multiple
tasks for the same job does local caching help to some extent.
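
The back-of-the-envelope arithmetic above can be sketched as follows; the function and its numbers are illustrative assumptions, not measurements from our clusters:

```python
# Rough illustration of why per-job jar copies add up.
# The replication factor and cluster sizes are hypothetical assumptions.

def jar_copies(num_tasks, num_nodes, hdfs_replication=3):
    """Estimate how many copies of a job jar one job run produces.

    Case (a): tiny job -> replication copies into HDFS plus one
    localization on the node that runs the task.
    Case (b): wide job -> localization approaches one copy per node,
    since the per-node cache only dedupes repeat tasks on the same node.
    """
    localizations = min(num_tasks, num_nodes)
    return hdfs_replication + localizations

# Tiny 1-mapper job: 3 HDFS replicas + 1 localized copy = 4 copies
assert jar_copies(num_tasks=1, num_nodes=1000) == 4

# Wide job on a 1000-node cluster: the jar lands on virtually every node
copies = jar_copies(num_tasks=5000, num_nodes=1000)
jar_mb = 100  # hypothetical 100 MB job jar
print(copies, "copies,", copies * jar_mb / 1024, "GB moved per run")
```

With a jar anywhere near the block size, a single wide job can move on the order of 100 GB of jar bytes across the cluster, which is the waste a shared cache would avoid on repeat runs.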

Having a truly shared cache across jobs helps in case a) because between repeated small
job runs the jar gets copied only once. For the large cases, the cluster takes the hit of
copying the jar to each node the first time a job jar is used; after that, repeated job runs
can leverage the cache.

When users split out a monolithic jar into multiple smaller jars (using libjars?) the benefits
will further increase.
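
For illustration, dependencies can already be listed separately via the -libjars option of Hadoop's GenericOptionsParser; the jar names and paths below are hypothetical:

```shell
# Instead of one monolithic job jar, ship only the job code and list
# shared dependencies via -libjars. A truly shared cache could then
# deduplicate the common dependency jars across jobs and users.
# (Jar names and paths are made up for illustration.)
hadoop jar wordcount-only.jar com.example.WordCount \
  -libjars /deps/guava.jar,/deps/commons-lang.jar \
  /input /output
```

Stable dependency jars change far less often than job code, so they are exactly the bytes a cross-job cache would stop re-copying.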

> truly shared cache for jars (jobjar/libjar)
> -------------------------------------------
>                 Key: HADOOP-9639
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9639
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: filecache
>    Affects Versions: 2.0.4-alpha
>            Reporter: Sangjin Lee
> Currently there is the distributed cache that enables you to cache jars and files so
that attempts from the same job can reuse them. However, sharing is limited with the distributed
cache because it is normally on a per-job basis. On a large cluster, sometimes copying of
jobjars and libjars becomes so prevalent that it consumes a large portion of the network bandwidth,
not to speak of defeating the purpose of "bringing compute to where data is". This is wasteful
because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared cache so that
multiple jobs from multiple users can share and cache jars. This JIRA is to open the discussion.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
