hadoop-common-user mailing list archives

From Mike Andrews <...@xoba.com>
Subject corrupt unreplicated block in dfs (0.18.3)
Date Thu, 26 Mar 2009 12:21:39 GMT
i noticed that when a file with no replication (i.e., replication=1)
develops a corrupt block, hadoop takes no action aside from the
datanode throwing an exception to the client trying to read the file.
i manually corrupted a block in order to observe this.
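for reference, the kind of client read that surfaces the exception looks roughly like this (just a sketch against the 0.18 api; the path is a placeholder for the replication=1 file):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadUnreplicatedFile {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration(); // picks up the cluster config from the classpath
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/repl1-test"); // placeholder: a file written with replication=1
        byte[] buf = new byte[64 * 1024];
        try {
            FSDataInputStream in = fs.open(p);
            // read to EOF; the failure surfaces when the corrupted block is reached
            while (in.read(buf) != -1) { }
            in.close();
        } catch (IOException e) {
            // with replication=1 there's no other replica to fall back on, so the
            // read just fails here and nothing else happens on the cluster side
            System.err.println("read failed on " + p + ": " + e);
        }
    }
}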

obviously, with replication=1 it's impossible to fix the block, but i
thought perhaps hadoop would take some other action, such as deleting
the file outright, moving it to a "corrupt" directory, or marking or
tracking it somehow to note that there's un-fixable corruption in the
filesystem. instead, the current behaviour seems to sweep the
corruption under the rug and allow it to persist, with nothing beyond
the exception reported to the specific client doing the read.

if anyone has any information about this issue or how to work around
it, please let me know.

on the other hand, i verified that corrupting a block in a
replication=3 file causes hadoop to re-replicate the block from
another existing copy, which is good and is what i expected.
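in case anyone wants to reproduce the two cases, writing a pair of test files with different replication factors is straightforward; something like the sketch below should do (paths and sizes are just placeholders):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteTestFiles {
    public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        // replication=1: a corrupted block can only be reported, never repaired
        write(fs, new Path("/tmp/repl1-test"), (short) 1);
        // replication=3: a corrupted block can be re-replicated from a surviving good copy
        write(fs, new Path("/tmp/repl3-test"), (short) 3);
    }

    private static void write(FileSystem fs, Path p, short replication) throws IOException {
        FSDataOutputStream out = fs.create(p, replication);
        // write a few megabytes of data so there's something to corrupt
        for (int i = 0; i < 1024 * 1024; i++) {
            out.writeInt(i);
        }
        out.close();
    }
}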

best,
mike


-- 
permanent contact information at http://mikerandrews.com
