hadoop-common-dev mailing list archives

From "Bryan Pendleton (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-35) Files missing chunks can cause mapred runs to get stuck
Date Mon, 13 Feb 2006 23:28:43 GMT
     [ http://issues.apache.org/jira/browse/HADOOP-35?page=all ]

Bryan Pendleton updated HADOOP-35:

    Attachment: dfsshell.health.patch.txt

> Files missing chunks can cause mapred runs to get stuck
> -------------------------------------------------------
>          Key: HADOOP-35
>          URL: http://issues.apache.org/jira/browse/HADOOP-35
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>  Environment: ~20 datanode DFS cluster
>     Reporter: Bryan Pendleton
>  Attachments: dfsshell.health.patch.txt
> I've now several times run into a problem where a large run gets stalled as a result
> of a missing data block. The latest was a stall in the Summer - i.e., the data might've
> all been there, but it was impossible to proceed because the CRC file was missing a
> block. It would be nice to:
> 1) Have a "health check" running on a MapReduce job. If any data isn't available, emit
> periodic warnings, and maybe have a timeout in case the data never comes back. Such
> warnings *should* specify which file(s) are affected by the missing blocks.
> 2) Have a utility, possibly part of the existing dfs utility, which can check for dfs
> files with unlocatable blocks. It could even show the 'health' of a file, i.e., what
> percentage of its blocks are currently at the desired replication level. Currently,
> there's no way that I know of to find out whether a file in DFS is going to be unreadable.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, contact one of the administrators.