hadoop-hdfs-issues mailing list archives

From "Ning Zhang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-767) Job failure due to BlockMissingException
Date Fri, 13 Nov 2009 00:58:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12777311#action_12777311 ]

Ning Zhang commented on HDFS-767:

Hi Todd, 

Thanks for the link. Dhruba also suggested it before. It works pretty well if we have a good
estimate of the time to serve one block, which can be used as the "slot time" in the
algorithm. It works great for Ethernet, where the frame size is fixed and we have a pretty
good idea of how long to wait before retransmitting.
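
The classic Ethernet-style backoff we're discussing looks roughly like this (a minimal
sketch; the slot time and the truncation cap here are illustrative, not HDFS values):

{code:java}
import java.util.Random;

public class BinaryExponentialBackoff {
  private static final long SLOT_TIME_MS = 3;    // illustrative "time to serve one unit"
  private static final int MAX_BACKOFF_EXP = 10; // truncate the exponent, as Ethernet does

  private final Random random = new Random();

  /** Sleep a random number of slots drawn from [0, 2^attempt), as in CSMA/CD. */
  public void backoff(int attempt) throws InterruptedException {
    int exp = Math.min(attempt, MAX_BACKOFF_EXP);
    long slots = (long) (random.nextDouble() * (1L << exp));
    Thread.sleep(slots * SLOT_TIME_MS);
  }
}
{code}

Note that the draw always starts from 0, which matters for the HDFS case below.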

In the case of HDFS, the size of the data to be read could range from several KB to hundreds
of MB, and the time spent serving a request could range from sub-millisecond to several
seconds. So it is hard to pick a slot time that captures the typical request serving time.
We can certainly set the slot time small enough to accommodate short requests, and the # of
retries very large to accommodate long requests. But the worst case is unbounded: because
the random wait is always drawn starting from 0, a client can draw a very small wait on
every retry, no matter how many retries are allowed, so the job still fails. This is OK for
Ethernet since there are other protocols on top of it that add another layer of fault
tolerance.

Since DFSClient is already at the top layer of DFS and we don't want clients to worry too
much about fault tolerance, it would be nice to have an upper bound on the number of
retries. The effect of the proposed formula is similar to exponential backoff in the case of
a large number of short requests, but it takes the # of failures into account when
calculating the wait time: the # of failures indicates how busy the block is and therefore
how much (non-zero) time we should wait. In the worst case, each retry round lets at least
256 more clients get the block (assuming serving a block costs < 3 sec), and there is a
fixed upper bound on the number of retries: (max # of mapper or reducer slots) / 256.
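
To make this concrete, one formula with the shape described could look like the following
(a sketch: the 3-second window is the block-serving estimate from above, and the exact
expression is my illustration, not a committed implementation):

{code:java}
import java.util.Random;

public class FailureProportionalBackoff {
  // Illustrative window: the ~3 sec it takes to serve one block, per the discussion above.
  private static final long TIME_WINDOW_MS = 3000;

  private final Random random = new Random();

  /**
   * The wait grows with the number of failures seen so far and, unlike a
   * plain exponential backoff, never starts from zero after the first
   * failure: the floor is failures * window, plus a random spread of up
   * to (failures + 1) windows to desynchronize the clients.
   */
  public long waitTimeMs(int failures) {
    return (long) (TIME_WINDOW_MS * failures
        + TIME_WINDOW_MS * (failures + 1) * random.nextDouble());
  }
}
{code}

Because the floor of the wait grows with the failure count, no client retries immediately,
and each ~3-second round drains at least another 256 readers, which is what bounds the
total number of retries.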

> Job failure due to BlockMissingException
> ----------------------------------------
>                 Key: HDFS-767
>                 URL: https://issues.apache.org/jira/browse/HDFS-767
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ning Zhang
> If a block is requested by too many mappers/reducers (say, 3000) at the same time, a
> BlockMissingException is thrown because the request exceeds the upper limit (I think 256 by
> default) on the number of threads accessing the same block at the same time. The DFSClient
> will catch that exception and retry 3 times, waiting 3 seconds before each retry. Since the
> wait time is a fixed value, a lot of clients will retry at about the same time and a large
> portion of them get another failure. After 3 retries, about 256*4 = 1024 clients have
> gotten the block. If the number of clients is more than that, the job will fail.
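
For context, the fixed-wait retry described above behaves roughly like this sketch
(readBlock and the nested exception class are placeholders, not the actual DFSClient
internals):

{code:java}
class FixedWaitRetry {
  static final int MAX_RETRIES = 3;
  static final long FIXED_WAIT_MS = 3000;

  byte[] readWithRetry(long blockId) throws Exception {
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
      try {
        return readBlock(blockId);
      } catch (BlockMissingException e) {
        if (attempt == MAX_RETRIES) throw e;
        // Every client sleeps the same fixed interval, so they all wake up
        // and retry at about the same moment, colliding again.
        Thread.sleep(FIXED_WAIT_MS);
      }
    }
    throw new AssertionError("unreachable");
  }

  // Placeholder for the real block read; always fails to illustrate the limit.
  byte[] readBlock(long blockId) throws BlockMissingException {
    throw new BlockMissingException("too many concurrent readers");
  }

  static class BlockMissingException extends Exception {
    BlockMissingException(String msg) { super(msg); }
  }
}
{code}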

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
