hadoop-hdfs-user mailing list archives

From Zheng Shao <zsh...@gmail.com>
Subject Unable to complete writing a file that encountered error
Date Tue, 19 Jan 2010 20:31:56 GMT
Sometimes the DFSClient encounters an error while writing a file: all of
the data nodes for the current block are gone.

In that case, the DFSClient has only two options:
1. Delete the file.
2. Do nothing. The namenode will keep a lease on the file, and the
DFSClient will keep an idle thread for that file.

With option 1, we cannot recover even the blocks that were
successfully written.
With option 2, we waste resources: we should be able to clean up the
lease and the thread without shutting down the DFSClient and waiting
for the 1-hour timeout on the lease.

Is there anything better we can do in this case (recover all blocks
that are successfully written, AND clean up the lease and the thread)?
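To make the asked-for third behavior concrete, here is a toy model of the
choice in plain Java. This is NOT actual DFSClient code; the class and
method names (FailedWriteHandler, recoverAndRelease, etc.) are hypothetical
and only sketch the desired semantics: keep the blocks that completed, and
release the lease instead of idling until the timeout.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical toy model of the failed-write dilemma -- not Hadoop code.
class FailedWriteHandler {
    // Blocks that were fully written before the failure.
    private final List<String> completedBlocks = new ArrayList<>();
    private boolean leaseHeld = true;

    void recordCompletedBlock(String blockId) {
        completedBlocks.add(blockId);
    }

    // Option 1: delete the file -- the completed blocks are lost too.
    void abandonFile() {
        completedBlocks.clear();
        leaseHeld = false;
    }

    // The behavior this post asks for: keep the blocks already written
    // AND give up the lease, instead of idling for the 1-hour timeout.
    List<String> recoverAndRelease() {
        leaseHeld = false;            // explicitly release the lease
        return completedBlocks;       // blocks written so far survive
    }

    boolean isLeaseHeld() {
        return leaseHeld;
    }
}
```

Under option 2 as it stands, `leaseHeld` would simply stay true until the
namenode's timeout expires; the sketch's `recoverAndRelease` combines the
good halves of both options.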
Is there any JIRA open for this?

Thanks Yajun for discovering this.

