hadoop-hdfs-issues mailing list archives

From "Konstantin Boudnik (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.
Date Tue, 24 Nov 2009 18:22:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12782068#action_12782068 ]

Konstantin Boudnik commented on HDFS-630:

I missed this JIRA while it was in progress, but I'm going to comment anyway. The comment is about
the newly added test, which is written for JUnit v.3:
+public class TestDFSClientExcludedNodes extends TestCase {
I'd like to ask all reviewers to pay attention to the fact that new tests are supposed to be
written for JUnit v.4.
Here's a [short instruction|http://wiki.apache.org/hadoop/HowToDevelopUnitTests] on how it
should be done.
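As a hedged illustration of what the linked instructions describe, the JUnit v.4 form of the test skeleton would look roughly like this (the test body shown is a placeholder, not the actual test from the patch; only the class/method structure is the point):

```java
// Sketch of the JUnit 4 style: the class no longer extends
// junit.framework.TestCase, and test methods are marked with @Test
// instead of relying on the "testXxx" naming convention.
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class TestDFSClientExcludedNodes {

  @Test
  public void testExcludedNodes() {
    // ... the actual test logic from the patch would go here ...
    assertTrue(true); // placeholder body so the sketch is complete
  }
}
```

With this structure the test runner discovers the method via the @Test annotation, so no inheritance from TestCase is needed.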

Also, the commit message has the wrong JIRA number in it: it says HBASE-630 instead of HDFS-630.

> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes
when locating the next block.
> -------------------------------------------------------------------------------------------------------------------
>                 Key: HDFS-630
>                 URL: https://issues.apache.org/jira/browse/HDFS-630
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs client
>    Affects Versions: 0.21.0
>            Reporter: Ruyue Ma
>            Assignee: Cosmin Lehene
>            Priority: Minor
>         Attachments: 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 0001-Fix-HDFS-630-for-0.21.patch,
0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch,
0001-Fix-HDFS-630-trunk-svn-2.patch, HDFS-630.patch
> created from hdfs-200.
> If, during a write, the DFSClient sees that a block replica location for a newly allocated
block is unreachable, it re-requests the NN for a fresh set of replica locations for
the block. It tries this dfs.client.block.write.retries times (default 3), sleeping 6 seconds
between each retry (see DFSClient.nextBlockOutputStream).
> This setting works well when you have a reasonably sized cluster; if you have only a few datanodes
in the cluster, every retry may pick the same dead datanode, and the above logic bails out.
> Our solution: when requesting a block location from the namenode, the client passes the NN the excluded datanodes.
The list of dead datanodes is kept only for one block allocation.
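The proposed retry-with-exclusion behavior can be sketched as a small self-contained simulation. Everything here is illustrative: pickReplica stands in for the namenode's block placement, connect simulates a connection attempt, and MAX_RETRIES mirrors dfs.client.block.write.retries; none of these are the real HDFS APIs.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the proposed client-side behavior: each time a
// connection to a chosen datanode fails, the client records it as excluded
// and asks the (simulated) namenode for a location that avoids it.
public class ExcludeNodesSketch {
    static final int MAX_RETRIES = 3; // mirrors dfs.client.block.write.retries

    // Simulated namenode placement: first replica not in the excluded set.
    static String pickReplica(List<String> datanodes, Set<String> excluded) {
        for (String dn : datanodes) {
            if (!excluded.contains(dn)) {
                return dn;
            }
        }
        return null; // no usable datanode left
    }

    // Simulated connection attempt: only "dn2" is reachable in this sketch.
    static boolean connect(String dn) {
        return "dn2".equals(dn);
    }

    public static void main(String[] args) {
        List<String> datanodes = Arrays.asList("dn1", "dn2", "dn3");
        Set<String> excluded = new HashSet<>(); // reset for each block allocation
        String chosen = null;
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            String candidate = pickReplica(datanodes, excluded);
            if (candidate == null) {
                break; // cluster too small: everything is excluded
            }
            if (connect(candidate)) {
                chosen = candidate;
                break;
            }
            excluded.add(candidate); // never re-pick a known-dead node
        }
        System.out.println("chosen = " + chosen); // dn1 fails, dn2 succeeds
    }
}
```

Without the excluded set, a small cluster can hand back the same dead node on every retry; with it, each retry is guaranteed to try a different node until none remain.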

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
