hadoop-pig-dev mailing list archives

From "Ashutosh Chauhan (JIRA)" <j...@apache.org>
Subject [jira] Commented: (PIG-872) use distributed cache for the replicated data set in FR join
Date Thu, 19 Nov 2009 18:03:39 GMT

    [ https://issues.apache.org/jira/browse/PIG-872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12780174#action_12780174 ]

Ashutosh Chauhan commented on PIG-872:
--------------------------------------

I think the original intent was *not* to hard-code the fact that the fragmented input should
be the first input. I think it's good to have that flexibility (e.g., if we later decide that
the ordering of join inputs should be consistent across different join algorithms and thus
the fragmented input should be last, in line with symmetric hash join). This has led to the
twisted need to represent the fragmented input as null in replFiles[]. Nonetheless, it could
be fixed such that replFiles[] consists of exactly n-1 values with no nulls. However, that
would make this patch bigger and is kind of orthogonal to this issue. So, I would suggest
tracking that in a separate JIRA, if we think it is something that should be fixed.
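To make the two layouts concrete, here is a minimal sketch (hypothetical code, not the actual Pig source; class and method names are made up) of the current replFiles[] convention, where the fragmented input's slot holds null, versus the compacted n-1 alternative mentioned above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReplFilesLayout {
    // Current layout: one slot per join input; the fragmented input's
    // slot is left null so its position is not hard-coded.
    static String[] withNullSlot(int numInputs, int fragIdx) {
        String[] replFiles = new String[numInputs];
        for (int i = 0; i < numInputs; i++) {
            replFiles[i] = (i == fragIdx) ? null : "replFile_" + i;
        }
        return replFiles;
    }

    // Alternative layout: exactly n-1 entries with no nulls; the
    // fragmented input would be tracked separately by its index.
    static List<String> compacted(String[] withNulls) {
        List<String> out = new ArrayList<>();
        for (String f : withNulls) {
            if (f != null) out.add(f);
        }
        return out;
    }

    public static void main(String[] args) {
        String[] current = withNullSlot(3, 0);
        System.out.println(Arrays.toString(current)); // [null, replFile_1, replFile_2]
        System.out.println(compacted(current));       // [replFile_1, replFile_2]
    }
}
```

The null-slot layout keeps index alignment between join inputs and replicated files; the compacted layout avoids null checks but needs the fragmented input's position carried alongside it.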

> use distributed cache for the replicated data set in FR join
> ------------------------------------------------------------
>
>                 Key: PIG-872
>                 URL: https://issues.apache.org/jira/browse/PIG-872
>             Project: Pig
>          Issue Type: Improvement
>            Reporter: Olga Natkovich
>            Assignee: Sriranjan Manjunath
>         Attachments: PIG_872.patch
>
>
> Currently, the replicated file is read directly from DFS by all maps. If the number of
> concurrent maps is huge, we can overwhelm the NameNode with open calls.
> Using the distributed cache will address the issue and might also give a performance boost,
> since the file will be copied locally once and then reused by all tasks running on the same
> machine.
> The basic approach would be to use cacheArchive to place the file into the cache on the
> frontend; on the backend, the tasks would need to refer to the data using the path from the
> cache.
> Note that cacheArchive does not work in Hadoop local mode. (Not a problem for us right
> now as we don't use it.)
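The open-call arithmetic behind the description can be sketched in a small self-contained simulation (plain Java, not the Hadoop DistributedCache API; the task counts and round-robin placement are hypothetical): without caching every map task opens the replicated file against the NameNode, while with a per-machine cache only the first task on each machine triggers a fetch.

```java
import java.util.HashSet;
import java.util.Set;

public class CacheSimulation {
    /** One DFS open per map task when every task reads the file directly. */
    static int directDfsOpens(int numTasks) {
        return numTasks;
    }

    /** With a per-machine cache, only the first task scheduled on each
     *  machine fetches the file; later tasks read the local copy. */
    static int cachedOpens(int numTasks, int numMachines) {
        Set<Integer> machinesThatFetched = new HashSet<>();
        for (int task = 0; task < numTasks; task++) {
            int machine = task % numMachines; // assumed round-robin placement
            machinesThatFetched.add(machine); // at most one fetch per machine
        }
        return machinesThatFetched.size();
    }

    public static void main(String[] args) {
        int tasks = 10_000, machines = 500;
        System.out.println("direct DFS opens: " + directDfsOpens(tasks));          // 10000
        System.out.println("cached opens:     " + cachedOpens(tasks, machines));   // 500
    }
}
```

With 10,000 concurrent maps on 500 machines, the load on the NameNode drops from 10,000 opens to 500 fetches, which is the boost the description anticipates.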

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

