hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1263) 0.20: in tryUpdateBlock, the meta file is renamed away before genstamp validation is done
Date Wed, 23 Jun 2010 17:30:56 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12881774#action_12881774 ]

Todd Lipcon commented on HDFS-1263:

bq. can you also explain the error state that results? (truncated blocks, infinite loops,
bad meta-data, etc)

Yea, what happened is that we had three replicas, and this managed to happen on all three,
somehow, due to multiple concurrent recoveries. It's tough to parse out from the logs, but
I think basically it was the following sequence of events:

1. DN A told to recover blk_B_1
2. DN A gets block info from all DNs, and a new genstamp 2
3. DN A gets disconnected from network, gets swapped out, whatever, for a minute
4. NN times out the block recovery lease after 60s and another recovery is initiated by a
client still calling appendFile()
5. DN B is told to recover blk_B_1
6. DN B starts to get block info from all nodes - this takes a while trying to talk to DN
A because it's still paused
7. DN A comes back to life
8. DN B receives block info from A and asks for new genstamp (3)
9. DN B wins the updateBlock race, and updates all replicas to genstamp 3
10. DN A calls updateBlock on all replicas, asking to go from genstamp 1 to genstamp 2. This
fails because the current genstamp is 3 on all replicas. In the process of failing, it effectively
trashes the meta file by renaming it.
11. All further attempts to recover the block fail because all replicas hit the "no meta file" error.

The above seems really contrived, but I'm pretty sure that's what I saw happen :) This JIRA
deals with step 10: a stale updateBlock call coming from some DN that had paused during recovery
results in corrupting the replica (by renaming away the meta file).
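The flawed ordering in step 10 can be sketched with a toy in-memory model (hypothetical names; the real FSDataset renames files on disk, this just models the rename-before-validate sequencing):

```java
// Simplified model of the flawed 0.20 tryUpdateBlock ordering: the meta
// file is renamed away *before* the genstamp check, so a stale caller
// corrupts the replica even though its update is ultimately rejected.
import java.util.HashMap;
import java.util.Map;

public class TryUpdateBlockSketch {
    // block id -> current meta file name, e.g. "blk_B_3.meta"
    static Map<String, String> metaFiles = new HashMap<>();
    static Map<String, Long> genStamps = new HashMap<>();

    // Flawed ordering: rename first, validate second.
    static boolean tryUpdateBlock(String blockId, long oldGS, long newGS) {
        String meta = metaFiles.get(blockId);
        if (meta == null) {
            throw new IllegalStateException("Meta file not found for " + blockId);
        }
        // Step 1 (the bug): rename the meta file away unconditionally.
        metaFiles.put(blockId, meta + "_tmp" + newGS);
        // Step 2: validate the genstamp -- too late, the rename already happened.
        long curGS = genStamps.get(blockId);
        if (newGS <= curGS) {
            // Rejected, but the meta file was left under the tmp name:
            // the replica is now effectively corrupt.
            return false;
        }
        genStamps.put(blockId, newGS);
        metaFiles.put(blockId, "blk_" + blockId + "_" + newGS + ".meta");
        return true;
    }

    public static void main(String[] args) {
        metaFiles.put("B", "blk_B_1.meta");
        genStamps.put("B", 1L);
        // Step 9: DN B wins the race, 1 -> 3 succeeds.
        tryUpdateBlock("B", 1, 3);
        // Step 10: DN A's stale call 1 -> 2 is rejected...
        boolean ok = tryUpdateBlock("B", 1, 2);
        System.out.println("stale update accepted: " + ok);
        // ...but the rename already clobbered the meta file name (step 11).
        System.out.println("meta file now: " + metaFiles.get("B"));
    }
}
```

Running this prints that the stale update was rejected while the meta file is nonetheless left under a tmp name, which is exactly the "no meta file" state seen in step 11.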

It's also suspicious that a node will allow recovery to start (the startBlockRecovery call)
if it thinks it's already the primary DN for recovery on that block. To fix that, we could
make startBlockRecovery throw an IOE if it finds the block in the ongoingRecovery map and
the call is not coming from itself.
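A minimal sketch of that guard (the class, map, and caller-identity check are simplified placeholders, not the actual 0.20 DataNode code):

```java
// Hypothetical guard: refuse startBlockRecovery for a block this node
// already believes it is the primary for, unless the caller is this node.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class RecoveryGuardSketch {
    // block id -> id of the node that initiated the ongoing recovery
    private final Map<String, String> ongoingRecovery = new HashMap<>();
    private final String selfId;

    public RecoveryGuardSketch(String selfId) {
        this.selfId = selfId;
    }

    public synchronized void startBlockRecovery(String blockId, String callerId)
            throws IOException {
        String primary = ongoingRecovery.get(blockId);
        if (primary != null && !callerId.equals(selfId)) {
            // Another recovery is in flight and the request is remote: reject.
            throw new IOException("Block " + blockId
                + " already under recovery by primary " + primary);
        }
        ongoingRecovery.put(blockId, callerId);
    }
}
```

With this guard, DN B's startBlockRecovery call in step 5 would fail against a node that still thinks it is the primary, instead of silently starting a second concurrent recovery.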

> 0.20: in tryUpdateBlock, the meta file is renamed away before genstamp validation is done
> -----------------------------------------------------------------------------------------
>                 Key: HDFS-1263
>                 URL: https://issues.apache.org/jira/browse/HDFS-1263
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20-append
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>             Fix For: 0.20-append
> Saw an issue where multiple datanodes are trying to recover at the same time, and all
> of them failed. I think the issue is in FSDataset.tryUpdateBlock: we do the rename of blk_B_OldGS
> to blk_B_OldGS_tmpNewGS and *then* check that the generation stamp is moving upwards. Because
> of this, invalid updateBlock calls are rejected, but they then cause future updateBlock calls
> to fail with "Meta file not found" errors.
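The fix implied by the description is to swap the ordering: validate the genstamp transition before touching the meta file, so a rejected update leaves the replica intact. A sketch, again with hypothetical simplified names:

```java
// Validate-then-rename ordering: a stale caller fails the genstamp check
// before any on-disk state has been disturbed.
import java.io.IOException;

public class ValidateThenRenameSketch {
    private long curGenStamp;
    private String metaFileName;

    public ValidateThenRenameSketch(String blockId, long gs) {
        this.curGenStamp = gs;
        this.metaFileName = "blk_" + blockId + "_" + gs + ".meta";
    }

    public synchronized void updateBlock(String blockId, long oldGS, long newGS)
            throws IOException {
        // Step 1: validate first. Stale callers are rejected here,
        // with the meta file untouched.
        if (oldGS != curGenStamp || newGS <= curGenStamp) {
            throw new IOException("Stale genstamp: cur=" + curGenStamp
                + " old=" + oldGS + " new=" + newGS);
        }
        // Step 2: only now rename the meta file to the new genstamp.
        curGenStamp = newGS;
        metaFileName = "blk_" + blockId + "_" + newGS + ".meta";
    }

    public String getMetaFileName() {
        return metaFileName;
    }
}
```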

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
