hadoop-hdfs-issues mailing list archives

From "Ning Zhang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-767) Job failure due to BlockMissingException
Date Tue, 15 Dec 2009 00:06:18 GMT

    [ https://issues.apache.org/jira/browse/HDFS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12790477#action_12790477 ]

Ning Zhang commented on HDFS-767:

Thanks for the comments, Steve and Todd.

I checked the JDK source code (1.6.0_16), and Random() uses a very simple default seed:

    /**
     * Creates a new random number generator. This constructor sets
     * the seed of the random number generator to a value very likely
     * to be distinct from any other invocation of this constructor.
     */
    public Random() { this(++seedUniquifier + System.nanoTime()); }
    private static volatile long seedUniquifier = 8682522807148012L;

Based on the discussion in Sun's forum (http://forums.sun.com/thread.jspa?threadID=5398150),
nanoTime is a native method implemented from CPU clock cycles, so the chance of two machines
getting the same value from nanoTime is not that high even if they all boot up at the same
time. I agree that adding the machine's MAC address would greatly reduce the probability of
a collision, and I am fine with making that change in the next version. The change would be
fairly simple since JDK 1.6 supports getting the MAC address.
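A minimal sketch of the idea, assuming a hypothetical helper named macSeed (not part of any
patch here): mix the first available MAC address, obtained via JDK 1.6's
NetworkInterface.getHardwareAddress(), into the nanoTime-based seed so that two machines
booted at the same instant still seed differently.

```java
import java.net.NetworkInterface;
import java.util.Enumeration;
import java.util.Random;

public class SeedWithMac {
    // Hypothetical helper: pack the first available MAC address into a long
    // and XOR it with nanoTime, so hosts with identical clocks still diverge.
    static long macSeed() {
        long mac = 0L;
        try {
            Enumeration<NetworkInterface> ifs = NetworkInterface.getNetworkInterfaces();
            while (ifs != null && ifs.hasMoreElements()) {
                byte[] hw = ifs.nextElement().getHardwareAddress();
                if (hw != null && hw.length > 0) {
                    for (byte b : hw) {
                        mac = (mac << 8) | (b & 0xFF);
                    }
                    break; // first interface with a MAC is enough
                }
            }
        } catch (Exception e) {
            // Fall through: nanoTime alone remains a usable (weaker) seed.
        }
        return mac ^ System.nanoTime();
    }

    public static void main(String[] args) {
        Random r = new Random(macSeed());
        System.out.println("per-host draw: " + r.nextInt(1000));
    }
}
```

Note that getHardwareAddress() can return null (e.g. loopback or restricted permissions),
so the sketch falls back to the nanoTime component alone in that case.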

> Job failure due to BlockMissingException
> ----------------------------------------
>                 Key: HDFS-767
>                 URL: https://issues.apache.org/jira/browse/HDFS-767
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ning Zhang
>            Assignee: Ning Zhang
>         Attachments: HDFS-767.patch
> If a block is requested by too many mappers/reducers (say, 3000) at the same time, a
> BlockMissingException is thrown because the number of threads accessing the same block at
> the same time exceeds the upper limit (I think 256 by default). The DFSClient will catch
> that exception and retry 3 times, waiting 3 seconds before each retry. Since the wait time
> is a fixed value, many clients will retry at about the same time, and a large portion of
> them will fail again. After 3 retries, only about 256*4 = 1024 clients have gotten the
> block. If there are more clients than that, the job will fail.
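The fixed 3-second wait described above is what synchronizes the retry storm. One standard
remedy (a sketch, not the DFSClient's actual implementation; backoffMillis and the 3000 ms
base are illustrative assumptions) is randomized exponential backoff, where each client waits
a doubling window plus random jitter so retries spread out instead of colliding:

```java
import java.util.Random;

public class RetryBackoff {
    // Sketch: wait time for a given retry attempt. The window doubles each
    // attempt (3s, 6s, 12s, ...) and up to 100% random jitter is added, so
    // thousands of clients retrying the same block arrive spread over time.
    static long backoffMillis(int attempt, long baseMillis, Random rng) {
        long window = baseMillis << attempt;                 // exponential window
        return window + (long) (rng.nextDouble() * window);  // plus jitter
    }

    public static void main(String[] args) {
        Random rng = new Random(); // per-client RNG; seeding is discussed above
        for (int attempt = 0; attempt < 3; attempt++) {
            long wait = backoffMillis(attempt, 3000L, rng);
            System.out.println("attempt " + attempt + ": wait " + wait + " ms");
            // A real client would Thread.sleep(wait) here before re-requesting.
        }
    }
}
```

With jitter, even clients that started in the same millisecond draw different waits, so the
server sees a smoothed arrival rate rather than 3000 simultaneous re-requests every 3 seconds.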

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
