hbase-issues mailing list archives

From "Abhishek Singh Chouhan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13601) Connection leak during log splitting
Date Thu, 30 Apr 2015 15:57:06 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521715#comment-14521715 ]

Abhishek Singh Chouhan commented on HBASE-13601:
------------------------------------------------

In HLogFactory:

{noformat}
public static HLog.Reader createReader(final FileSystem fs, final Path path,
      Configuration conf, CancelableProgressable reporter, boolean allowCustom){
...
              FSDataInputStream stream = fs.open(path);
              // Note that zero-length file will fail to read PB magic, and attempt to create
              // a non-PB reader and fail the same way existing code expects it to. If we get
              // rid of the old reader entirely, we need to handle 0-size files differently from
              // merely non-PB files.
              byte[] magic = new byte[ProtobufLogReader.PB_WAL_MAGIC.length];
              boolean isPbWal = (stream.read(magic) == magic.length)   <------- We encounter an exception here, catch it, but never close the stream
                  && Arrays.equals(magic, ProtobufLogReader.PB_WAL_MAGIC);

{noformat}
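One way to plug the leak is to close the stream on the failure path before rethrowing, so the underlying HDFS socket is released even when the read throws. The sketch below is illustrative only, not the actual HBase patch; `MagicCheck`, its hypothetical `PB_WAL_MAGIC` constant, and the use of `ByteArrayInputStream` in place of `FSDataInputStream` are assumptions for a self-contained example:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class MagicCheck {
    // Hypothetical stand-in for ProtobufLogReader.PB_WAL_MAGIC.
    static final byte[] PB_WAL_MAGIC = new byte[] {'P', 'W', 'A', 'L'};

    // Reads the magic bytes; if the read throws, the stream is closed
    // before the exception propagates, so no connection is leaked.
    static boolean isPbWal(InputStream stream) throws IOException {
        byte[] magic = new byte[PB_WAL_MAGIC.length];
        try {
            return stream.read(magic) == magic.length
                && Arrays.equals(magic, PB_WAL_MAGIC);
        } catch (IOException e) {
            stream.close();  // release the socket on the failure path
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream ok = new ByteArrayInputStream(new byte[] {'P', 'W', 'A', 'L'});
        System.out.println(isPbWal(ok)); // prints "true"
    }
}
```

In the real `createReader`, the caller still owns the stream on success, so only the exceptional path closes it here; a `finally` block would instead close it unconditionally and break the success case.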

> Connection leak during log splitting
> ------------------------------------
>
>                 Key: HBASE-13601
>                 URL: https://issues.apache.org/jira/browse/HBASE-13601
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.10
>            Reporter: Abhishek Singh Chouhan
>            Assignee: Abhishek Singh Chouhan
>
> Ran into an issue where the region server died with the following exception:
> {noformat}
> 2015-04-29 17:10:11,856 WARN  [nector@0.0.0.0:60030] mortbay.log - EXCEPTION
> java.io.IOException: Too many open files
>         at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
>         at org.mortbay.jetty.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:75)
>         at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:686)
>         at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
>         at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
>         at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
>         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {noformat}
> Realized that all the TCP sockets on the system were exhausted because the regionserver kept retrying the log split, failing each time and leaving a connection open -
> {noformat}
> java.io.IOException: Got error for OP_READ_BLOCK, self=/10..99.3:50695, remote=/10.232.99.36:50010, for file /hbase/WALs/host1,60020,1425930917890-splitting/host1%2C60020%2C1425930917890.1429358890944, for pool BP-181199659-10.232.99.2-1411124363096 block 1074497051_756497
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:432)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:397)
>         at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:786)
>         at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:665)
>         at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:325)
>         at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:567)
>         at org.apache.hadoop.hdfs.DFSInputStream.seekToNewSource(DFSInputStream.java:1446)
>         at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:769)
>         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:799)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:840)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:124)
>         at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:660)
>         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:569)
>         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>         at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>         at org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
