hadoop-hdfs-issues mailing list archives

From "Liang Xie (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-6448) change BlockReaderLocalLegacy timeout detail
Date Mon, 26 May 2014 06:38:02 GMT
Liang Xie created HDFS-6448:
-------------------------------

             Summary: change BlockReaderLocalLegacy timeout detail
                 Key: HDFS-6448
                 URL: https://issues.apache.org/jira/browse/HDFS-6448
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs-client
    Affects Versions: 2.4.0, 3.0.0
            Reporter: Liang Xie
            Assignee: Liang Xie


Our HBase cluster is deployed on Hadoop 2.0. In one incident we hit HDFS-5016 on the HDFS side,
but we also found, on the HBase side, that the DFS client was hung in getBlockReader. After reading
the code we found that there is a timeout setting in the current codebase, but the default
hdfsTimeout value is "-1" (from Client.java: getTimeout(conf)), which means no timeout...
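To make the effect of the default concrete, here is a minimal standalone check (names like the
class TimeoutCheck and the 60000 ms fallback are only illustrative; the config keys are the
standard ipc.client.ping and dfs.client.socket-timeout). With ipc.client.ping left at its default
of true, Client.getTimeout(conf) returns -1, and that -1 is what ends up as the RPC read timeout:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.Client;

public class TimeoutCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Prints -1 with default settings, i.e. no read timeout on the RPC
    // used by getBlockLocalPathInfo().
    System.out.println("hdfsTimeout   = " + Client.getTimeout(conf));
    // dfs.client.socket-timeout is the value we propose to use instead
    // (60000 ms shown here only as an assumed fallback for the demo).
    System.out.println("socketTimeout = "
        + conf.getInt("dfs.client.socket-timeout", 60000));
  }
}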

The hung stack trace looks like this:
at $Proxy21.getBlockLocalPathInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:215)
at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:267)
at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:180)
at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:812)

One feasible fix is to replace it with socketTimeout; see the attached patch.
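
A minimal sketch of the direction, not the attached patch itself: it assumes the proxy used by
getBlockPathInfo() is built through a helper along the lines of
DFSUtil.createClientDatanodeProtocolProxy(...), and the method name getProxyWithReadTimeout is
purely illustrative. The only real change is which timeout value is handed to the proxy:

  // Illustrative sketch only: pass the client's socketTimeout
  // (dfs.client.socket-timeout) instead of hdfsTimeout (-1 by default),
  // so a stuck getBlockLocalPathInfo() call fails instead of hanging forever.
  private static ClientDatanodeProtocol getProxyWithReadTimeout(
      DatanodeInfo node, Configuration conf, DFSClient dfsClient,
      boolean connectToDnViaHostname) throws IOException {
    int timeout = dfsClient.getConf().socketTimeout;  // was hdfsTimeout
    return DFSUtil.createClientDatanodeProtocolProxy(
        node, conf, timeout, connectToDnViaHostname);
  }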



--
This message was sent by Atlassian JIRA
(v6.2#6252)
