hadoop-hive-dev mailing list archives

From "Prasad Chakka (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HIVE-900) Map-side join failed if there are large number of mappers
Date Thu, 29 Oct 2009 18:47:59 GMT

    [ https://issues.apache.org/jira/browse/HIVE-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771551#action_12771551 ]

Prasad Chakka commented on HIVE-900:
------------------------------------

Just an off-the-wall idea: temporarily increase the replication factor for this block so that
it is available on more racks, reducing the network cost and avoiding the BlockMissingException.
Of course, we would need a way to reliably reset the replication factor to its original setting afterwards.
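For reference, the replication bump suggested in the comment could be done with the HDFS shell before the map-join job runs. This is only a sketch of the idea: the warehouse path and the replication factors are hypothetical examples, and reliably recording and restoring the original factor is exactly the open question noted in the comment.

```shell
# Before launching the map-join job, raise the replication factor of the
# small table's files so its blocks exist on more datanodes/racks.
# -w waits until the requested replication is actually reached.
hadoop fs -setrep -w 10 /user/hive/warehouse/small_table

# ... run the map-join query ...

# Afterwards, set the replication factor back. The value 3 here stands in
# for the original setting, which would have to be recorded beforehand
# (e.g. via "hadoop fs -stat %r <file>") to restore it reliably.
hadoop fs -setrep 3 /user/hive/warehouse/small_table
```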

> Map-side join failed if there are large number of mappers
> ---------------------------------------------------------
>
>                 Key: HIVE-900
>                 URL: https://issues.apache.org/jira/browse/HIVE-900
>             Project: Hadoop Hive
>          Issue Type: Improvement
>            Reporter: Ning Zhang
>            Assignee: Ning Zhang
>
> Map-side join is efficient when joining a huge table with a small table: each mapper reads
> the small table into main memory and performs the join locally. However, if too many mappers
> are generated for the map join, a large number of them will simultaneously request the same
> block of the small table. Hadoop currently has an upper limit on the number of concurrent
> requests for the same block (250?). If that limit is reached, a BlockMissingException is
> thrown, which causes many mappers to be killed. Retrying does not solve the problem but
> worsens it.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

