hadoop-hdfs-issues mailing list archives

From "Thanh Do (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1479) Massive file deletion causes some timeouts in writers
Date Tue, 02 Nov 2010 21:51:08 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12927633#action_12927633 ]

Thanh Do commented on HDFS-1479:

Thanks, Zheng, for the explanation.
The reason I couldn't find AsyncDiskService is that I was looking at 0.20.2,
where deletion at the datanode is done synchronously. Now I have found it in 0.21.0.
In general, how do you plan to fix this?

> Massive file deletion causes some timeouts in writers
> -----------------------------------------------------
>                 Key: HDFS-1479
>                 URL: https://issues.apache.org/jira/browse/HDFS-1479
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 0.20.2
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>            Priority: Minor
> When we do a massive deletion of files, we saw some timeouts in writers that are writing
> to HDFS. This does not happen on all DataNodes, but it happens regularly enough that we
> would like to fix it.
> {code}
> yyy.xxx.com: 10/10/25 00:55:32 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-5459995953259765112_37619608
> java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/ remote=/]
> {code}
> This is caused by the default setting of AsyncDiskService, which starts 4 threads per
> volume to delete files.
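
The mechanism described above can be sketched as follows. This is a hypothetical, simplified illustration (not the actual Hadoop AsyncDiskService code): a fixed-size thread pool per volume handles file deletions asynchronously, and the pool size controls how much deletion I/O can run concurrently and contend with writers. The class and method names here are invented for the example.

```java
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of an async per-volume deletion service, loosely
// modeled on the idea behind AsyncDiskService in 0.21.0. With 4 threads
// per volume (the default mentioned in this issue), a burst of deletions
// can saturate the disk and starve concurrent block writers; a smaller
// pool throttles deletion I/O at the cost of slower cleanup.
public class VolumeDeletionService {
    private final ExecutorService pool;

    public VolumeDeletionService(int threadsPerVolume) {
        // One fixed pool per volume; threadsPerVolume bounds the number
        // of simultaneous delete() calls hitting this disk.
        this.pool = Executors.newFixedThreadPool(threadsPerVolume);
    }

    // Queue a file for deletion without blocking the caller.
    public void deleteAsync(File f) {
        pool.submit(() -> f.delete());
    }

    // Drain the queue and stop the worker threads.
    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

Under this sketch, one possible fix for the timeouts is simply lowering the threads-per-volume count so deletions compete less with writer I/O, or rate-limiting submissions to the pool.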

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
