hadoop-hdfs-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-7608) hdfs dfsclient newConnectedPeer has no write timeout
Date Wed, 06 May 2015 03:30:14 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HDFS-7608:
-----------------------------------
    Labels: BB2015-05-TBR patch  (was: patch)

> hdfs dfsclient  newConnectedPeer has no write timeout
> -----------------------------------------------------
>
>                 Key: HDFS-7608
>                 URL: https://issues.apache.org/jira/browse/HDFS-7608
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fuse-dfs, hdfs-client
>    Affects Versions: 2.3.0, 2.6.0
>         Environment: hdfs 2.3.0  hbase 0.98.6
>            Reporter: zhangshilong
>            Assignee: Xiaoyu Yao
>              Labels: BB2015-05-TBR, patch
>         Attachments: HDFS-7608.0.patch, HDFS-7608.1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Problem:
> The HBase CompactSplitThread may block forever while reading DataNode blocks.
> Debugging found that the epoll wait timeout was set to 0, so the epoll wait never times out.
> Cause: in HDFS 2.3.0, HBase uses DFSClient to read and write blocks.
> DFSClient creates its socket through newConnectedPeer(addr), but sets no read or write timeout on it.
> In 2.6.0, newConnectedPeer gained a read timeout to deal with this problem, but it still sets no write timeout. Why was a write timeout not added?
> I think NioInetPeer needs a default socket timeout, so applications do not have to add the timeout themselves.
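
For context on the gap the report describes: the plain Socket API only bounds reads (SO_TIMEOUT), so a write timeout has to be applied explicitly when the socket is wrapped as a Peer, which exposes setWriteTimeout(int) alongside setReadTimeout(int). The sketch below is JDK-only illustration, not Hadoop source; the host name, port, and payload size are placeholders chosen for the example.

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

/**
 * Minimal illustration of the read/write timeout asymmetry behind this
 * report. SO_TIMEOUT bounds blocking reads, but a blocking write has no
 * timeout of its own: once the TCP send buffer fills, a peer that stops
 * draining data stalls the writer indefinitely. That is the gap a write
 * timeout on the Peer would close.
 */
public class SocketTimeoutAsymmetry {
  public static void main(String[] args) throws IOException {
    int timeoutMs = 60_000;

    try (Socket sock = new Socket()) {
      // Connect and read timeouts are easy to bound with the plain Socket API.
      sock.connect(new InetSocketAddress("datanode.example.com", 50010), timeoutMs);
      sock.setSoTimeout(timeoutMs); // applies to reads only

      // There is no Socket#setWriteTimeout: this write can block without limit
      // if the remote side stops reading and the send buffer fills up.
      OutputStream out = sock.getOutputStream();
      out.write(new byte[1024 * 1024]);
    }
  }
}

In that light, the request here is simply that newConnectedPeer bound the write side at peer-creation time the same way it already bounds the read side in 2.6.0.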



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
