hadoop-common-dev mailing list archives

From "Hemanth Yamijala (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-5286) DFS client blocked for a long time reading blocks of a file on the JobTracker
Date Thu, 19 Feb 2009 12:34:02 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Yamijala updated HADOOP-5286:

    Attachment: jt-log-for-blocked-reads.txt

The attached snippet from the JobTracker log shows the exceptions thrown by the DFS client.
Note also the timestamps on the messages: the system ultimately recovered after almost
90 minutes and continued to process this job.

> DFS client blocked for a long time reading blocks of a file on the JobTracker
> -----------------------------------------------------------------------------
>                 Key: HADOOP-5286
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5286
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.20.0
>            Reporter: Hemanth Yamijala
>         Attachments: jt-log-for-blocked-reads.txt
> On a large cluster, we've observed that the DFS client was blocked reading a block of
> a file for almost one and a half hours. The file was being read by the JobTracker of the
> cluster, and was a split file of a job. In the NameNode logs, we observed the following
> message for the block:
> Inconsistent size for block blk_2044238107768440002_840946 reported from <ip>:<port>
> current size is 195072 reported size is 1318567
> Details follow.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
