From: "Yu Li (JIRA)"
To: issues@hbase.apache.org
Date: Tue, 1 Nov 2016 07:25:59 +0000 (UTC)
Subject: [jira] [Comment Edited] (HBASE-16980) TestRowProcessorEndpoint failing consistently

    [ https://issues.apache.org/jira/browse/HBASE-16980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624614#comment-15624614 ]

Yu Li edited comment on HBASE-16980 at 11/1/16 7:24 AM:
--------------------------------------------------------

Ok, here comes the analysis. I could reproduce the failure in my local environment, though not consistently. {{testMultipleRows}} fails more frequently than {{testReadModifyWrite}}, and each time it fails I see a {{RetriesExhaustedException}} caused by {{CallQueueTooBigException}}, like below:

{noformat}
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=2, exceptions:
Tue Nov 01 14:53:14 CST 2016, RpcRetryingCaller{globalStartTime=1477983194439, pause=100, retries=2}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on localhost,59045,1477983189616, too many items queued ?
Tue Nov 01 14:53:14 CST 2016, RpcRetryingCaller{globalStartTime=1477983194439, pause=100, retries=2}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on localhost,59045,1477983189616, too many items queued ?
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:157)
	at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:108)
	at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73)
	... 6 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on localhost,59045,1477983189616, too many items queued ?
	at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:34118)
	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1631)
	at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104)
	at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:1)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
	... 8 more
com.google.protobuf.ServiceException: Error calling method RowProcessorService.Process
	at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:75)
	at org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$RowProcessorService$BlockingStub.process(RowProcessorProtos.java:1631)
	at org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint.swapRows(TestRowProcessorEndpoint.java:272)
	at org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint.access$3(TestRowProcessorEndpoint.java:265)
	at org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint$SwapRowsRunner.run(TestRowProcessorEndpoint.java:258)
	at org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint$1.run(TestRowProcessorEndpoint.java:225)
	at java.lang.Thread.run(Thread.java:745)
{noformat}

When such an exception happens, the design of the test cases cannot guarantee correctness. Let's look at them one by one:

For {{testMultipleRows}}, the test launches 100 threads that swap two rows in parallel. Since the thread count is even, the two rows should end up unswapped, but that only holds if *all operations succeed* or *an even number of operations fail*.

{{testReadModifyWrite}} fails for a similar reason: if any operation fails with {{RetriesExhaustedException}}, the final check {{assertEquals(numThreads + 1, finalCounter)}} will fail. There is already a {{failures}} counter, but in both {{IncrementRunner}} and {{SwapRowsRunner}} we catch {{Throwable}} without ever increasing it...

To improve the UT cases, we should:
1) not assert that the failure count is zero;
2) count the failures for {{testReadModifyWrite}};
3) take the {{swapped}} flag into account when asserting the result of {{testMultipleRows}}.
(See the sketches at the end of this comment.)

Regarding why HBASE-16195 makes the cases fail more frequently, I don't have much of a clue... The change below looks relevant to me:
{code}
-        this.chunkQueue.add(c);
+        if (chunkQueue != null && !this.closed && !this.chunkQueue.offer(c)) {
+          if (LOG.isTraceEnabled()) {
+            LOG.trace("Chunk queue is full, won't reuse this new chunk. Current queue size: "
+                + chunkQueue.size());
+          }
+        }
{code}
After HBASE-16195 the chunk is no longer added to {{chunkQueue}}, so could the {{chunkQueue != null}} check be more expensive than {{this.chunkQueue.add(c)}}? Unlikely in theory, right?... Anyway, I believe this is a UT design issue and not closely related to the HBASE-16195 change. Will upload a patch soon to reinforce the UT. [~apurtell] and [~busbey], please let me know your thoughts. Thanks.

Assigning the issue to myself, btw.
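
For illustration, here is a minimal standalone sketch (plain Java, not the actual test code or patch; the failure injection and counter are simulated stand-ins) of the failure accounting proposed for {{testReadModifyWrite}}: the {{failures}} counter gets bumped in the catch block, and the final assertion tolerates failed increments instead of expecting exactly {{numThreads + 1}}:

{code}
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Standalone simulation (run with -ea): each "increment" may fail the way a
// CallQueueTooBigException would, failures are counted in the catch block,
// and the final check subtracts them instead of asserting failures == 0.
public class ReadModifyWriteSketch {
  public static void main(String[] args) throws InterruptedException {
    final int numThreads = 100;
    final AtomicInteger counter = new AtomicInteger(0);   // stands in for the row counter
    final AtomicInteger failures = new AtomicInteger(0);  // the counter the test never bumps today

    Thread[] threads = new Thread[numThreads];
    for (int i = 0; i < numThreads; i++) {
      threads[i] = new Thread(() -> {
        try {
          if (ThreadLocalRandom.current().nextInt(10) == 0) {
            throw new RuntimeException("simulated CallQueueTooBigException");
          }
          counter.incrementAndGet();
        } catch (Throwable t) {
          failures.incrementAndGet();  // the missing step in the current runner
        }
      });
      threads[i].start();
    }
    for (Thread t : threads) {
      t.join();
    }

    int finalCounter = counter.incrementAndGet();  // the extra "+1" increment
    // Relaxed assertion: the expected count shrinks by the number of failed increments.
    assert finalCounter == numThreads + 1 - failures.get()
        : "finalCounter=" + finalCounter + ", failures=" + failures.get();
    System.out.println("finalCounter=" + finalCounter + ", failures=" + failures.get());
  }
}
{code}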
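
And a similar standalone sketch (again plain Java, not the real test) for the {{swapped}}-flag idea in {{testMultipleRows}}: the expected final row order depends on the parity of the swaps that actually succeeded, so the assertion must consult that count rather than assume all 100 swaps went through:

{code}
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Standalone simulation (run with -ea): an even number of threads swap two
// rows, some attempts fail, and the expected final order follows the parity
// of the successful swaps -- the "swapped" state the assertion should use.
public class SwapRowsSketch {
  public static void main(String[] args) throws InterruptedException {
    final int numThreads = 100;  // even, so the all-success case ends unswapped
    final String[] rows = {"rowA", "rowB"};
    final AtomicInteger successfulSwaps = new AtomicInteger(0);

    Thread[] threads = new Thread[numThreads];
    for (int i = 0; i < numThreads; i++) {
      threads[i] = new Thread(() -> {
        try {
          if (ThreadLocalRandom.current().nextInt(10) == 0) {
            throw new RuntimeException("simulated RetriesExhaustedException");
          }
          synchronized (rows) {  // the real swap is atomic inside the coprocessor
            String tmp = rows[0];
            rows[0] = rows[1];
            rows[1] = tmp;
          }
          successfulSwaps.incrementAndGet();
        } catch (Throwable t) {
          // a failed swap leaves the rows untouched; nothing to count here
        }
      });
      threads[i].start();
    }
    for (Thread t : threads) {
      t.join();
    }

    // swapped == true iff an odd number of swaps actually succeeded
    boolean swapped = successfulSwaps.get() % 2 == 1;
    String expectedFirst = swapped ? "rowB" : "rowA";
    assert rows[0].equals(expectedFirst)
        : "successfulSwaps=" + successfulSwaps.get() + ", rows[0]=" + rows[0];
    System.out.println("successfulSwaps=" + successfulSwaps.get() + ", rows[0]=" + rows[0]);
  }
}
{code}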

> TestRowProcessorEndpoint failing consistently
> ---------------------------------------------
>
>                 Key: HBASE-16980
>                 URL: https://issues.apache.org/jira/browse/HBASE-16980
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.2.4
>            Reporter: Andrew Purtell
>            Assignee: Yu Li
>
> Found while evaluating 1.2.4 RC1
> {noformat}
> TestRowProcessorEndpoint.testMultipleRows:246 expected:<3> but was:<2>
> TestRowProcessorEndpoint.testReadModifyWrite:184 expected:<101> but was:<91>
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)