hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5697) connection leak in DFSInputStream
Date Mon, 23 Dec 2013 02:54:50 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13855348#comment-13855348 ]

Uma Maheswara Rao G commented on HDFS-5697:
-------------------------------------------

Good catch, Haitao!
How about adding a debug message on the exception?

> connection leak in DFSInputStream
> ---------------------------------
>
>                 Key: HDFS-5697
>                 URL: https://issues.apache.org/jira/browse/HDFS-5697
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Haitao Yao
>         Attachments: HDFS-5697.patch
>
>
> While getting the BlockReader from DFSInputStream, if the cache misses, the DFSInputStream
> creates a new peer. But if an error occurs while creating the new BlockReader with the given
> peer and an IOException is thrown, the created peer is never closed, leaving too many sockets
> in CLOSE_WAIT state.
> here's the stacktrace:
> java.io.IOException: Got error for OP_READ_BLOCK, self=/10.130.100.32:26657, remote=/10.130.100.32:50010,
> for file /hbase/STAT_RESULT_SALT/d17e9cf1d1de34910bc6724c7cc21ed8/_0/c75770dbed6444488b609385e8bc9e0d,
> for pool BP-2041309608-10.130.100.157-1361861188734 block -7893680960325255689_107620083
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:429)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:394)
>         at org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
>         at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
>         at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538)
>         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1409)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1921)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1703)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:338)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:997)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:229)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:165)
> So there should be a catch clause at the end of the function: if an IOException
> is thrown, the peer should be closed.
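
For context, here is a minimal, self-contained sketch of the leak pattern and the
proposed fix. It is illustrative only: Peer, BlockReader, and all method names below
are stand-ins for the real HDFS types and for BlockReaderFactory.newBlockReader(),
not the actual patch.

import java.io.Closeable;
import java.io.IOException;

class ConnectionLeakSketch {
    interface Peer extends Closeable {}      // stand-in for o.a.h.hdfs.net.Peer
    interface BlockReader {}                 // stand-in for o.a.h.hdfs.BlockReader

    Peer getPeerFromCache() { return null; } // stand-in: simulate a cache miss
    Peer newTcpPeer() { return () -> {}; }   // stand-in: open a fresh connection

    BlockReader newBlockReader(Peer peer) throws IOException {
        // Stand-in for BlockReaderFactory.newBlockReader(), which can fail
        // with "Got error for OP_READ_BLOCK" from RemoteBlockReader2.checkSuccess().
        throw new IOException("Got error for OP_READ_BLOCK");
    }

    BlockReader getBlockReader() throws IOException {
        Peer peer = getPeerFromCache();
        if (peer == null) {
            peer = newTcpPeer();             // cache miss: create a new peer
        }
        try {
            return newBlockReader(peer);
        } catch (IOException e) {
            // The fix: close the peer before rethrowing. Without this, a
            // freshly created peer is abandoned on failure and its socket
            // lingers in CLOSE_WAIT. (Per the review comment above, a debug
            // log message could also be emitted here.)
            peer.close();
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            new ConnectionLeakSketch().getBlockReader();
        } catch (IOException e) {
            System.out.println("read failed, peer closed: " + e.getMessage());
        }
    }
}

With this pattern, a peer obtained on a cache miss is always closed when the
BlockReader cannot be constructed, so the connection no longer leaks.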



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
