hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-7608) hdfs dfsclient newConnectedPeer has no write timeout
Date Tue, 12 May 2015 04:42:01 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HDFS-7608:
    Status: Open  (was: Patch Available)

[~esteban], thank you for rebasing, but this patch cannot be committed. As I described in my earlier comment,
it would potentially change the existing behavior much more than intended.


I'm clicking Cancel Patch to make it clearer that this patch isn't ready.

> hdfs dfsclient  newConnectedPeer has no write timeout
> -----------------------------------------------------
>                 Key: HDFS-7608
>                 URL: https://issues.apache.org/jira/browse/HDFS-7608
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fuse-dfs, hdfs-client
>    Affects Versions: 2.6.0, 2.3.0
>         Environment: hdfs 2.3.0  hbase 0.98.6
>            Reporter: zhangshilong
>            Assignee: Xiaoyu Yao
>              Labels: BB2015-05-TBR, patch
>         Attachments: HDFS-7608.0.patch, HDFS-7608.1.patch, HDFS-7608.2.patch
>   Original Estimate: 24h
>  Remaining Estimate: 24h
> Problem:
> HBase's CompactSplitThread may lock forever while reading DataNode blocks.
> Debugging found that the epoll_wait timeout is set to 0, so epoll_wait never times out.
> Cause: in HDFS 2.3.0,
> HBase uses DFSClient to read and write blocks.
> DFSClient creates a socket using newConnectedPeer(addr), but sets no read or write timeout on it.
> In 2.6.0, newConnectedPeer added a read timeout to deal with the problem, but did
> not add a write timeout. Why was the write timeout not added?
> I think NioInetPeer needs a default socket timeout, so applications do not need to
> add the timeout themselves.
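The asymmetry the reporter describes can be sketched in plain Java, outside any Hadoop API: `Socket.setSoTimeout` bounds blocking *reads*, but `java.net.Socket` has no analogous setting for blocking *writes*, so a write timeout has to be imposed by a higher I/O layer rather than the socket itself. The class and method names below are illustrative, not part of HDFS; this is a minimal demonstration of the read-side timeout only.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {

    // Returns true if a blocking read on an idle connection hits SO_TIMEOUT.
    static boolean readTimesOut(int timeoutMillis) throws IOException {
        // A local server that accepts the connection but never writes anything,
        // so the client's read is guaranteed to block.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            // setSoTimeout bounds blocking *reads* only; java.net.Socket
            // exposes no equivalent knob for blocking writes.
            client.setSoTimeout(timeoutMillis);
            try {
                client.getInputStream().read();  // blocks at most ~timeoutMillis
                return false;                    // peer sent data (not expected here)
            } catch (SocketTimeoutException e) {
                return true;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("read timed out: " + readTimesOut(100));
    }
}
```

Because a stuck write has no such built-in escape hatch, a client whose peer stops draining the connection can block indefinitely on write, which matches the hang described above.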

This message was sent by Atlassian JIRA
