hadoop-hdfs-dev mailing list archives

From "Zheng Shao (JIRA)" <j...@apache.org>
Subject [jira] Created: (HDFS-1479) Massive file deletion causes some timeouts in writers
Date Tue, 26 Oct 2010 20:20:19 GMT
Massive file deletion causes some timeouts in writers
-----------------------------------------------------

                 Key: HDFS-1479
                 URL: https://issues.apache.org/jira/browse/HDFS-1479
             Project: Hadoop HDFS
          Issue Type: Improvement
    Affects Versions: 0.20.2
            Reporter: Zheng Shao
            Assignee: Zheng Shao
            Priority: Minor


When we do a massive deletion of files, we see timeouts in the writers that are writing to HDFS.
This does not happen on all DataNodes, but it happens regularly enough that we would like
to fix it.

{code}
yyy.xxx.com: 10/10/25 00:55:32 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-5459995953259765112_37619608
java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.10.10.10:56319 remote=/10.10.10.10:50010]
{code}
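
For reference, the 69000 ms figure is the DFSClient ResponseProcessor read timeout. Below is a hedged sketch of how that value comes about, assuming the 0.20-era defaults (a 60 s base socket timeout plus a 3 s extension per DataNode in a replication-3 pipeline); the constant names follow HdfsConstants but should be checked against your tree:

{code}
// Assumed 0.20-era defaults; verify against HdfsConstants in your version.
final int READ_TIMEOUT = 60 * 1000;           // base socket read timeout (ms)
final int READ_TIMEOUT_EXTENSION = 3 * 1000;  // per-DataNode extension (ms)
final int pipelineSize = 3;                   // default replication factor

// DFSClient extends the base timeout by the pipeline depth:
int timeoutValue = READ_TIMEOUT + READ_TIMEOUT_EXTENSION * pipelineSize; // 69000 ms
{code}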

This is caused by the default setting of AsyncDiskService, which starts 4 threads per volume
to delete files. The concurrent deletions compete with writers for I/O on the same disks,
which can delay pipeline acks past the read timeout.
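
For context, here is a minimal sketch of the per-volume executor pattern that AsyncDiskService implements; the class and method names are illustrative, not the exact Hadoop source. With 4 deletion threads per volume, a large invalidation burst issues 4 concurrent unlinks against each disk, which is enough to starve a writer flushing to the same spindle:

{code}
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch (not the exact Hadoop source) of the pattern in
// AsyncDiskService: a small fixed thread pool per volume so that block
// deletion never blocks the caller. THREADS_PER_VOLUME = 4 mirrors the
// default this issue wants to tune.
public class AsyncDeletionSketch {
  private static final int THREADS_PER_VOLUME = 4;

  private final Map<File, ExecutorService> executors =
      new HashMap<File, ExecutorService>();

  public AsyncDeletionSketch(File[] volumes) {
    for (File volume : volumes) {
      executors.put(volume, Executors.newFixedThreadPool(THREADS_PER_VOLUME));
    }
  }

  // Queue an asynchronous delete on the pool that owns the block's volume.
  // Up to four queued deletes per volume run concurrently, competing for
  // disk time with any writer on the same volume.
  public void deleteAsync(File volume, final File blockFile) {
    executors.get(volume).execute(new Runnable() {
      public void run() {
        if (!blockFile.delete()) {
          System.err.println("Failed to delete " + blockFile);
        }
      }
    });
  }
}
{code}

Lowering the per-volume thread count to 1, or otherwise throttling the deletion rate, would serialize the unlinks on each disk and leave more I/O headroom for writers; that is the kind of tuning this improvement proposes.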

