hadoop-hdfs-issues mailing list archives

From "Kitti Nanasi (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock
Date Wed, 12 Sep 2018 20:56:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kitti Nanasi updated HDFS-13882:
--------------------------------
    Attachment: HDFS-13882.002.patch

> Set a maximum for the delay before retrying locateFollowingBlock
> ----------------------------------------------------------------
>
>                 Key: HDFS-13882
>                 URL: https://issues.apache.org/jira/browse/HDFS-13882
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.1.0
>            Reporter: Kitti Nanasi
>            Assignee: Kitti Nanasi
>            Priority: Major
>         Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch
>
>
> We are increasingly seeing cases where customers run into the IOException
> "Unable to close file because the last block does not have enough number of
> replicas" on client file closure. The common workaround is to increase
> dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
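A minimal sketch of the workaround described above, as an hdfs-site.xml fragment. The property name and the value 10 are taken from the description; whether 10 is sufficient depends on how quickly the last block's replicas are reported in a given cluster:

```xml
<!-- hdfs-site.xml: raise the client-side retry count for
     locateFollowingBlock (default 5, workaround value 10) -->
<property>
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <value>10</value>
</property>
```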



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

