hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6448) change BlockReaderLocalLegacy timeout detail
Date Tue, 27 May 2014 18:02:04 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010003#comment-14010003 ]

Colin Patrick McCabe commented on HDFS-6448:
--------------------------------------------

Socket timeout seems reasonable to me.  DFSInputStream uses socketTimeout to get a proxy to
talk to the DN, in code like this:

{code}
  /** Read the block length from one of the datanodes. */
  private long readBlockLength(LocatedBlock locatedblock) throws IOException {
...
      try {
        cdp = DFSUtil.createClientDatanodeProtocolProxy(datanode,
            dfsClient.getConfiguration(), dfsClient.getConf().socketTimeout,
            dfsClient.getConf().connectToDnViaHostname, locatedblock);
{code}
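
For reference, {{socketTimeout}} itself comes from {{dfs.client.socket-timeout}} and falls back
to a 60-second default.  A rough sketch of the relevant line in {{DFSClient.Conf}} (paraphrased
from memory; exact field and constant names may differ between releases):

{code}
  // dfs.client.socket-timeout, defaulting to HdfsServerConstants.READ_TIMEOUT
  // (60 * 1000 ms), so the proxy always gets a finite timeout.
  socketTimeout = conf.getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY,
      HdfsServerConstants.READ_TIMEOUT);
{code}

So unlike {{hdfsTimeout}}, this value is always positive by default.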

So I am +1 on this patch.

bq. yes, we deployed hadoop2.0, so only the legacy HDFS-2246 is available. I took a quick look
at the HDFS-347 SCR code while making the patch and did not find the same issue (to be honest,
I am not familiar with this piece of code, so I probably just missed it). I think Colin Patrick
McCabe will definitely have the exact answer.

Just as a note, we kept the legacy block reader local around only because HDFS-347 wasn't
implemented on Windows.  If you are not using Windows, then I would recommend upgrading and
using the new one ASAP... HDFS-2246 has a lot of problems besides this one (its failure-handling
code is fairly buggy, especially in older releases).

bq. Do you know if this is only an issue in HDFS-2246 SCR? Is it present in HDFS-347 SCRs?

HDFS-347 uses {{socketTimeout}}.  The relevant code is in {{BlockReaderFactory#nextDomainPeer}}.
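
Roughly, the domain socket there is created with {{conf.socketTimeout}} as well.  A sketch of
the relevant lines (paraphrased from memory, not the exact source):

{code}
  private BlockReaderPeer nextDomainPeer() {
    // Try the peer cache first; on a miss, create a new domain socket
    // using socketTimeout rather than hdfsTimeout.
    if (remainingCacheTries > 0) {
      Peer peer = clientContext.getPeerCache().get(datanode, true);
      if (peer != null) {
        return new BlockReaderPeer(peer, true);
      }
    }
    DomainSocket sock = clientContext.getDomainSocketFactory().
        createSocket(pathInfo, conf.socketTimeout);
    if (sock == null) return null;
    return new BlockReaderPeer(new DomainPeer(sock), false);
  }
{code}

So the new short-circuit path never relies on {{hdfsTimeout}} for this step.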

> change BlockReaderLocalLegacy timeout detail
> --------------------------------------------
>
>                 Key: HDFS-6448
>                 URL: https://issues.apache.org/jira/browse/HDFS-6448
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>    Affects Versions: 3.0.0, 2.4.0
>            Reporter: Liang Xie
>            Assignee: Liang Xie
>         Attachments: HDFS-6448.txt
>
>
> Our HBase is deployed on hadoop2.0. In one incident we hit HDFS-5016 on the HDFS side, but
> we also found, on the HBase side, that the dfs client was hung at getBlockReader. Reading the
> code, we found there is a timeout setting in the current codebase, but the default hdfsTimeout
> value is "-1" (from Client.java:getTimeout(conf)), which means no timeout at all (see the
> getTimeout sketch after this description)...
> The hung stack trace looks like the following:
> at $Proxy21.getBlockLocalPathInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:215)
> at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:267)
> at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:180)
> at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:812)
> One feasible fix is replacing hdfsTimeout with socketTimeout; see the attached patch.
> Most of the credit should go to [~liushaohui].
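
For context on why {{hdfsTimeout}} ends up as -1: a sketch of {{Client#getTimeout}} as I
remember it (names may differ slightly by release):

{code}
  // When ipc.client.ping is enabled (the default), there is no RPC timeout
  // at all; getTimeout only returns a finite value when pings are disabled.
  public static final int getTimeout(Configuration conf) {
    if (!conf.getBoolean("ipc.client.ping", true)) {
      return getPingInterval(conf);
    }
    return -1;
  }
{code}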



