hadoop-hdfs-issues mailing list archives

From "Arpit Agarwal (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-13882) Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10
Date Wed, 12 Sep 2018 04:12:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16611566#comment-16611566 ]

Arpit Agarwal commented on HDFS-13882:
--------------------------------------

Sorry I missed responding to this earlier. We set this to 7 for a couple of our busier
customers to fix the same issue, and that worked; 7 retries works out to ~50 seconds of
total wait. However, I am unsure about increasing this across the board for everyone,
given the risk of cascading side effects.
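
For the arithmetic behind those numbers: a minimal sketch, assuming the client starts at
the default 400 ms initial delay (dfs.client.block.write.locateFollowingBlock.initial.delay.ms)
and doubles the delay before each subsequent attempt, so N retries wait roughly
400 * (2^N - 1) ms in total:

    // Sketch only: assumes exponential backoff starting at 400 ms and doubling
    // per retry, so the total wait is a geometric series: 400 * (2^N - 1) ms.
    public class LocateBlockRetryMath {
        public static void main(String[] args) {
            final long initialDelayMs = 400L; // assumed default initial delay
            for (int retries : new int[] {5, 7, 10}) {
                long totalMs = initialDelayMs * ((1L << retries) - 1);
                System.out.printf("%2d retries -> ~%.1f s total wait%n",
                        retries, totalMs / 1000.0);
            }
        }
    }

Under that assumption, 5 retries wait ~12.4 s, 7 retries ~50.8 s (the ~50 seconds above),
and the proposed 10 retries ~409 s, i.e. close to seven minutes in the worst case.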


> Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-13882
>                 URL: https://issues.apache.org/jira/browse/HDFS-13882
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.1.0
>            Reporter: Kitti Nanasi
>            Assignee: Kitti Nanasi
>            Priority: Major
>         Attachments: HDFS-13882.001.patch
>
>
> More and more, we are seeing cases where customers run into the java.io.IOException
> "Unable to close file because the last block does not have enough number of replicas"
> on client file close. The common workaround is to increase
> dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
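
For reference, a minimal sketch of the workaround described above, assuming it is
applied on the client side via hdfs-site.xml:

    <property>
      <!-- illustrative client-side override: raise the default of 5 to 10 -->
      <name>dfs.client.block.write.locateFollowingBlock.retries</name>
      <value>10</value>
    </property>

Per the backoff arithmetic sketched earlier, the trade-off is a longer worst-case wait
before close() fails.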



