hadoop-hdfs-issues mailing list archives

From "Thanh Do (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1479) Massive file deletion causes some timeouts in writers
Date Tue, 02 Nov 2010 20:22:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12927578#action_12927578 ]

Thanh Do commented on HDFS-1479:

Zheng, can you explain why you need massive deletion? That is, what kinds of applications
require such an operation? Thanks

> Massive file deletion causes some timeouts in writers
> -----------------------------------------------------
>                 Key: HDFS-1479
>                 URL: https://issues.apache.org/jira/browse/HDFS-1479
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 0.20.2
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>            Priority: Minor
> When we do a massive deletion of files, we saw some timeouts in writers that are writing
> to HDFS. This does not happen on all DataNodes, but it happens regularly enough that we
> would like to fix it.
> {code}
> yyy.xxx.com: 10/10/25 00:55:32 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor
> exception for block blk_-5459995953259765112_37619608 java.net.SocketTimeoutException: 69000
> millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
> local=/ remote=/]
> {code}
> This is caused by the default setting of AsyncDiskService, which starts 4 threads per
> volume to delete files.
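For readers unfamiliar with the mechanism being blamed: the per-volume deletion pool can be sketched roughly as below. This is a minimal illustration, not Hadoop's actual AsyncDiskService code; the class name `PerVolumeDeleter` and its methods are hypothetical, but the key default it models (4 deletion threads per volume) is the one cited in the issue. With that default, a burst of deletions can issue enough concurrent disk I/O per volume to starve writer threads on the same disk, producing the SocketTimeoutException above.

{code}
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of an AsyncDiskService-style helper: one fixed thread
// pool per volume, used to delete files asynchronously off the request path.
public class PerVolumeDeleter {
    // Default modeled on the issue description: 4 deletion threads per volume.
    private static final int THREADS_PER_VOLUME = 4;

    private final Map<String, ExecutorService> executors = new HashMap<>();

    public PerVolumeDeleter(String... volumes) {
        for (String v : volumes) {
            // Each volume gets its own pool so deletions on one disk
            // do not queue behind deletions on another.
            executors.put(v, Executors.newFixedThreadPool(THREADS_PER_VOLUME));
        }
    }

    // Queue an asynchronous deletion on the pool owning the file's volume.
    public void deleteAsync(String volume, File f) {
        executors.get(volume).execute(() -> f.delete());
    }

    // Stop accepting work and wait for queued deletions to finish.
    public void shutdown() throws InterruptedException {
        for (ExecutorService e : executors.values()) {
            e.shutdown();
            e.awaitTermination(10, TimeUnit.SECONDS);
        }
    }
}
{code}

Lowering THREADS_PER_VOLUME (or throttling how fast deletions are queued) would reduce the concurrent delete I/O competing with writers, which is the direction this improvement points at.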

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
