hbase-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13778) BoundedByteBufferPool incorrectly increasing runningAverage buffer length
Date Thu, 28 May 2015 00:44:17 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562092#comment-14562092 ]

Hadoop QA commented on HBASE-13778:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12735713/HBASE-13778_V2.patch
  against master branch at commit 6529d8833d965fb201b9fe6927eed425bf31607f.
  ATTACHMENT ID: 12735713

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 5 new or modified tests.

    {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0).

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 protoc{color}.  The applied patch does not increase the total number of protoc compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 checkstyle{color}.  The applied patch does not increase the total number of checkstyle errors.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 lineLengths{color}.  The patch does not introduce lines longer than 100 characters.

    {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

    {color:red}-1 core tests{color}.  The patch failed these unit tests:


Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/14206//testReport/
Findbugs (version 2.0.3) warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/14206//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/14206//artifact/patchprocess/checkstyle-aggregate.html

  Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/14206//console

This message is automatically generated.

> BoundedByteBufferPool incorrectly increasing runningAverage buffer length
> -------------------------------------------------------------------------
>
>                 Key: HBASE-13778
>                 URL: https://issues.apache.org/jira/browse/HBASE-13778
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>             Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
>
>         Attachments: HBASE-13778_V1.patch, HBASE-13778_V2.patch, HBASE-13778_V2.patch
>
>
> I was testing HBASE-13158 and noticed this. In BoundedByteBufferPool, 'runningAverage'
> starts from an initial value that defaults to 16K, so the pool creates its initial buffers
> at this size. A buffer may grow while it is used in ByteBufferOutputStream as more data is
> written to it. When the BB is returned to the pool, we try to adjust 'runningAverage' based
> on the capacity of the returned BB. We lack proper synchronization here, which lets
> runningAverage grow until it hits its maximum (1 MB).
> I am testing with the PE tool using randomRead and 20 client threads, so each get op
> returns one cell of roughly 1 KB. The default BB size created by the pool is 16K, so
> ideally there should be no need for any resizing at all.
> {code}
> 2015-05-26 20:12:21,965 INFO  [PriorityRpcServer.handler=5,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=2
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=2,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=4
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=8,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=6
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=12,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=9
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=13,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=10
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=18,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=12
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=16,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=14
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=5
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=19,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=15
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=4
> 2015-05-26 20:12:21,965 INFO  [PriorityRpcServer.handler=11,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=2
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=13
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=9,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=11
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=8
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=15,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=7
> 2015-05-26 20:12:21,966 INFO  [PriorityRpcServer.handler=4,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=16
> 2015-05-26 20:12:21,967 INFO  [PriorityRpcServer.handler=10,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=17
> 2015-05-26 20:12:21,967 INFO  [PriorityRpcServer.handler=14,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=19
> 2015-05-26 20:12:21,967 INFO  [PriorityRpcServer.handler=17,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=0, count=0, alloctions=18
> 2015-05-26 20:12:22,146 INFO  [PriorityRpcServer.handler=18,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=16384, totalCapacity=-16384, count=2, alloctions=20
> 2015-05-26 20:12:22,707 INFO  [PriorityRpcServer.handler=9,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=212992, totalCapacity=-32768, count=4, alloctions=21
> 2015-05-26 20:12:23,208 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=212992, totalCapacity=-81920, count=1, alloctions=22
> 2015-05-26 20:12:29,567 INFO  [PriorityRpcServer.handler=0,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=234837, totalCapacity=-49152, count=6, alloctions=23
> 2015-05-26 20:12:29,974 INFO  [PriorityRpcServer.handler=0,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=311296, totalCapacity=442368, count=6, alloctions=24
> 2015-05-26 20:12:31,356 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=606208, totalCapacity=1054037, count=9, alloctions=25
> 2015-05-26 20:12:31,894 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=901120, totalCapacity=742741, count=1, alloctions=26
> 2015-05-26 20:12:32,961 INFO  [PriorityRpcServer.handler=11,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=920234, totalCapacity=2479445, count=12, alloctions=27
> 2015-05-26 20:12:36,965 INFO  [PriorityRpcServer.handler=2,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=1037653, totalCapacity=3836586, count=18, alloctions=28
> 2015-05-26 20:12:42,212 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=1048120, totalCapacity=11203921, count=12, alloctions=29
> 2015-05-26 20:12:45,387 INFO  [PriorityRpcServer.handler=13,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=1048302, totalCapacity=10174915, count=1, alloctions=30
> 2015-05-26 20:12:46,171 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=1048302, totalCapacity=14177342, count=7, alloctions=31
> 2015-05-26 20:12:52,401 INFO  [PriorityRpcServer.handler=13,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=1048495, totalCapacity=19454171, count=8, alloctions=32
> 2015-05-26 20:12:52,541 INFO  [PriorityRpcServer.handler=2,queue=0,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=1048495, totalCapacity=16778574, count=1, alloctions=33
> 2015-05-26 20:12:56,631 INFO  [PriorityRpcServer.handler=5,queue=1,port=16020] io.BoundedByteBufferPool: Allocated new BB runningAverage=1048495, totalCapacity=28925990, count=2, alloctions=34
> ...
> ..
> {code}
> In getBuffer(), the first line removes the BB from the queue. putBuffer() first puts the
> BB back and only on the next line checks the pool's size. In that window, many other
> threads may already have taken buffers out.
> putBuffer(ByteBuffer bb)
> {code}
> int size = this.buffers.size(); // This size may be inexact.
> this.totalReservoirCapacity += bb.capacity();
> int average = 0;
> if (size != 0) {
>   average = this.totalReservoirCapacity / size;
> }
> if (average > this.runningAverage && average < this.maxByteBufferSizeToCache) {
>   this.runningAverage = average;
> }
> {code}
> getBuffer()
> {code}
> ByteBuffer bb = this.buffers.poll();
> if (bb != null) {
>   // Clear sets limit == capacity. Position == 0.
>   bb.clear();
>   this.totalReservoirCapacity -= bb.capacity();
> }
> {code}
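The inconsistent snapshot described in the snippets above can be reenacted deterministically in a single thread. This is a hypothetical sketch, not the actual BoundedByteBufferPool class: the field names mirror it, but the two getBuffer() decrements that have "not yet landed" are simulated simply by omitting them.

```java
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PoolRaceSketch {
  static final Queue<ByteBuffer> buffers = new ConcurrentLinkedQueue<>();
  static int totalReservoirCapacity = 0;
  static int runningAverage = 16384;                   // initial default, 16K
  static final int maxByteBufferSizeToCache = 1024 * 1024;

  public static void main(String[] args) {
    // Three handler threads return 16K buffers; each offer() lands
    // and each buffer's capacity is credited to the counter: 49152.
    for (int i = 0; i < 3; i++) {
      buffers.offer(ByteBuffer.allocate(16384));
      totalReservoirCapacity += 16384;
    }

    // Two concurrent getBuffer() calls poll buffers, so size() drops to 1,
    // but their `totalReservoirCapacity -= bb.capacity()` updates have not
    // landed yet when the next putBuffer() takes its snapshot (simulated
    // here by skipping the decrements):
    buffers.poll();
    buffers.poll();

    int size = buffers.size();                         // 1, inconsistent with
    int average = totalReservoirCapacity / size;       // 49152 / 1 = 49152
    if (average > runningAverage && average < maxByteBufferSizeToCache) {
      runningAverage = average;                        // ratchets up, never back down
    }
    System.out.println(runningAverage);                // prints 49152
  }
}
```

Because the `average > runningAverage` check only ever moves the average upward, each bad snapshot like this one sticks, which matches the log lines above climbing toward the 1 MB cap.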
> totalReservoirCapacity may not yet have been reduced when the division above computes the
> average; the log lines above make this clear.
> As a result we create much bigger buffers, and because we keep them in the pool they are
> never garbage collected.
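One way to rule out the inconsistent snapshot is to guard the queue and the capacity counter with the same lock, so `size()` and `totalReservoirCapacity` are always read as a consistent pair. This is only a sketch of that idea (the coarse lock trades some throughput for correctness); the patch actually committed for HBASE-13778 may take a different approach.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

public class GuardedPool {
  private final Queue<ByteBuffer> buffers = new ArrayDeque<>();
  private long totalReservoirCapacity = 0;
  private int runningAverage;
  private final int maxByteBufferSizeToCache;

  public GuardedPool(int initialAverage, int maxToCache) {
    this.runningAverage = initialAverage;
    this.maxByteBufferSizeToCache = maxToCache;
  }

  public synchronized ByteBuffer getBuffer() {
    ByteBuffer bb = buffers.poll();
    if (bb != null) {
      bb.clear();                                  // limit = capacity, position = 0
      totalReservoirCapacity -= bb.capacity();
    } else {
      bb = ByteBuffer.allocate(runningAverage);    // pool empty: allocate fresh
    }
    return bb;
  }

  public synchronized void putBuffer(ByteBuffer bb) {
    buffers.offer(bb);
    totalReservoirCapacity += bb.capacity();
    int size = buffers.size();                     // exact: same lock as the counter
    int average = (int) (totalReservoirCapacity / size);
    if (average > runningAverage && average < maxByteBufferSizeToCache) {
      runningAverage = average;
    }
  }

  public synchronized int runningAverage() { return runningAverage; }
}
```

With consistent snapshots, returning a 16K and a 32K buffer to a fresh pool moves the average to (16384 + 32768) / 2 = 24576, instead of the runaway values seen in the logs.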



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
