hadoop-hdfs-issues mailing list archives

From "philo vivero (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-1954) Improve corrupt files warning message
Date Fri, 27 May 2011 20:57:47 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13040437#comment-13040437 ]

philo vivero commented on HDFS-1954:
------------------------------------

Patrick, thanks for the advocacy and persistence. Todd, Suresh, et al, thanks for trying to
keep the quality high. And most of all, thanks everyone for compromising on the "best we can
do for now" instead of leaving it as it was: I think this will save many handfuls of hair from
being pulled in the coming year or two!

> Improve corrupt files warning message
> -------------------------------------
>
>                 Key: HDFS-1954
>                 URL: https://issues.apache.org/jira/browse/HDFS-1954
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: philo vivero
>            Assignee: Patrick Hunt
>         Attachments: HDFS-1954.patch, HDFS-1954.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> On NameNode web interface, you may get this warning:
>   WARNING : There are about 32 missing blocks. Please check the log or run fsck.
> If the cluster was started less than 14 days ago, it would be great to add: "Is dfs.data.dir defined?"
> Even better, if that parameter could be checked at the point of that error message and the error
> changed to "OMG dfs.data.dir isn't defined!". As it is, troubleshooting undefined parameters is a
> difficult proposition.
> I suspect this is an easy fix.
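
For illustration only (this is not the attached HDFS-1954.patch; the class and method names are
hypothetical), a minimal sketch of the kind of check being asked for, assuming the warning string
is built somewhere with access to the NameNode's Configuration:

    // Hypothetical sketch only -- not the attached HDFS-1954.patch.
    // Builds the missing-blocks warning and appends a hint when
    // dfs.data.dir was never set in the configuration.
    import org.apache.hadoop.conf.Configuration;

    public class MissingBlocksWarningSketch {
      static String buildWarning(long missingBlocks, Configuration conf) {
        StringBuilder msg = new StringBuilder();
        msg.append("WARNING : There are about ")
           .append(missingBlocks)
           .append(" missing blocks. Please check the log or run fsck.");
        // If the DataNode storage directories were never configured,
        // say so explicitly: missing blocks are often just a symptom
        // of that misconfiguration on a freshly started cluster.
        if (conf.get("dfs.data.dir") == null) {
          msg.append(" Is dfs.data.dir defined?");
        }
        return msg.toString();
      }
    }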

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
