Date: Thu, 19 Sep 2013 05:46:03 +0000 (UTC)
From: "stack (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-9535) Try a pool of direct byte buffers handling incoming ipc requests

    [ https://issues.apache.org/jira/browse/HBASE-9535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771609#comment-13771609 ]

stack commented on HBASE-9535:
------------------------------

Thanks for doing a bit of math [~xieliang007].

I'd say 162 bytes is unusually small for a request, as is the YCSB workload where queries are all the same size. What if the requests were bigger, say 1-100k, and we were doing 2-5k requests a second? Let's say 5k * 10k for easy calc. That'd be 50MB a second in the young generation, not counting any other allocations, and some of those items could get promoted if there were pressure on the young gen. What if the server was getting up to 1MB cells and we were doing 1k or more hits a second? The alternative is that these items would not go through the young generation at all.

How long did you run your test for? How long were the young-gen GCs each second?

I'd still be interested in trying this out since it seems easy enough to do. Thanks boss.

> Try a pool of direct byte buffers handling incoming ipc requests
> ----------------------------------------------------------------
>
>          Key: HBASE-9535
>          URL: https://issues.apache.org/jira/browse/HBASE-9535
>      Project: HBase
>   Issue Type: Brainstorming
>     Reporter: stack
>     Assignee: stack
>
> ipc takes in a query by allocating a ByteBuffer of the size of the request and then reading off the socket into this on-heap BB.
> Experiment with keeping a pool of BBs so we get some buffer reuse and cut down on the garbage generated. We could check a buffer out of the pool in RpcServer#Reader and check it back in when the Handler is done, just before it queues the response on the Responder's queue. We should be good since, at least for now, KVs get copied up into MSLAB (rather than referenced) when data gets stuffed into the MemStore; that should mean no references are left over when we check the BB back into the pool for the next use.
> If on-heap BBs work, we could then try direct BBs (allocating DBBs takes time, so if they are already allocated we should be good; GC of DBBs is a pain, but if they sit in a pool we shouldn't want that to happen anyway). The copy from socket to the DBB will be off-heap (should be fast).
> Could start w/ the HDFS DirectBufferPool.
> It is unbounded and keeps items by size (we might want to bypass the pool if an object is > size N).
> DBBs for this task would contend w/ the offheap BBs used by BlockReaderLocal when short-circuit reading. It'd be a bummer if we had to allocate big objects on-heap, but it would still be an improvement.
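For reference, a back-of-the-envelope check of the allocation arithmetic in the comment above. The workload figures (5k requests/s at ~10KB each, 1k requests/s at 1MB cells) are the assumed numbers from the comment, not measurements:

```java
/** Rough allocation-rate math from the comment above; the figures are assumed, not measured. */
public class RequestGarbageMath {
  public static void main(String[] args) {
    // 5k requests/s * ~10KB per request buffer = ~50MB/s of short-lived young-gen garbage.
    long midSized = 5_000L * 10_000;
    // 1k requests/s * 1MB cells = ~1GB/s, before counting any other allocations.
    long bigCells = 1_000L * 1_000_000;
    System.out.println(midSized / 1_000_000 + " MB/s for 5k x 10KB requests");
    System.out.println(bigCells / 1_000_000 + " MB/s for 1k x 1MB requests");
  }
}
```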
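To make the pooling idea concrete, here is a minimal sketch of a size-keyed buffer pool with the checkout/checkin lifecycle and the "bypass if > size N" behavior discussed above. The class and method names are hypothetical illustrations; this is not HBase's RpcServer code nor the HDFS DirectBufferPool API.

```java
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

/**
 * Illustrative buffer pool keyed by buffer size, in the spirit of the pool
 * described in the issue. Buffers larger than maxPooledSize bypass the pool
 * entirely, so one oversized request cannot pin a big buffer forever.
 */
public class RequestBufferPool {
  private final ConcurrentMap<Integer, Queue<ByteBuffer>> buffersBySize =
      new ConcurrentHashMap<>();
  private final int maxPooledSize;
  private final boolean direct;

  public RequestBufferPool(int maxPooledSize, boolean direct) {
    this.maxPooledSize = maxPooledSize;
    this.direct = direct;
  }

  /** Checkout, once the reader knows the request length. */
  public ByteBuffer checkout(int size) {
    if (size > maxPooledSize) {
      return allocate(size);          // bypass: too big to be worth pooling
    }
    Queue<ByteBuffer> q = buffersBySize.get(size);
    ByteBuffer buf = (q == null) ? null : q.poll();
    if (buf == null) {
      buf = allocate(size);           // pool miss: allocate a fresh buffer
    } else {
      buf.clear();                    // reuse: reset position and limit
    }
    return buf;
  }

  /** Checkin, once the handler is done and the response has been queued. */
  public void checkin(ByteBuffer buf) {
    if (buf.capacity() > maxPooledSize) {
      return;                         // bypassed buffers are left to the GC
    }
    buffersBySize
        .computeIfAbsent(buf.capacity(), k -> new ConcurrentLinkedQueue<>())
        .offer(buf);
  }

  private ByteBuffer allocate(int size) {
    return direct ? ByteBuffer.allocateDirect(size) : ByteBuffer.allocate(size);
  }
}
```

In the lifecycle sketched in the issue, the reader would call checkout(requestLength) after reading the request length off the socket, and checkin(buf) would be called once the handler has queued the response, on the assumption (stated in the description) that MemStore copies KVs into MSLAB rather than holding references into the buffer.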