hadoop-hdfs-issues mailing list archives

From "Shweta (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-13882) Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10
Date Wed, 05 Sep 2018 18:02:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604733#comment-16604733 ]

Shweta commented on HDFS-13882:
-------------------------------

Thanks for the patch [~knanasi]. As seen above, Jenkins complains about unit test failures.
Please check whether they are related to your changes and whether these tests pass locally
for you.
Also, TestWebHdfsTimeouts#testAuthUrlConnectTimeout has failed in the past for https://issues.apache.org/jira/browse/HDFS-10905.
Please check whether it is relevant in any way.
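
If it helps, a single test class can typically be run from the hadoop-hdfs-project/hadoop-hdfs
module with Maven Surefire (a minimal sketch, assuming a standard Hadoop source checkout and
Maven build; adjust the test class name as needed):

    # from hadoop-hdfs-project/hadoop-hdfs in the Hadoop source tree
    mvn test -Dtest=TestWebHdfsTimeouts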

> Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-13882
>                 URL: https://issues.apache.org/jira/browse/HDFS-13882
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.1.0
>            Reporter: Kitti Nanasi
>            Assignee: Kitti Nanasi
>            Priority: Major
>         Attachments: HDFS-13882.001.patch
>
>
> More and more, we are seeing cases where customers run into the java.io.IOException
> "Unable to close file because the last block does not have enough number of replicas" on
> client file closure. The common workaround is to increase
> dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
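
For illustration, the workaround described above amounts to overriding the property in the
client-side hdfs-site.xml (a minimal sketch; the value shown is the proposed new default from
this issue, not a tuned recommendation):

    <property>
      <!-- Retries when locating the following block on file close;
           this issue proposes changing the default from 5 to 10. -->
      <name>dfs.client.block.write.locateFollowingBlock.retries</name>
      <value>10</value>
    </property>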



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


