Date: Fri, 25 Oct 2013 18:28:41 +0000 (UTC)
From: "Jean-Daniel Cryans (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-9840) Large scans and BlockCache evictions problems

    [ https://issues.apache.org/jira/browse/HBASE-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805563#comment-13805563 ]

Jean-Daniel Cryans commented on HBASE-9840:
-------------------------------------------

bq. When all three buckets become full all new blocks are inserted at 0.25 split in a queue

Yes, but as I said, 2Q has its own configuration settings that you can't change at runtime, so you will just hit another edge case that isn't able to use all of the cache.
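To make the fixed-split concern concrete, here is a toy midpoint-insertion ("2Q-style") LRU sketch in Python. This is not HBase code: the `MidpointLru` class, the `SPLIT` constant, and the tiny capacity are all made up for illustration. New blocks enter at a fixed fraction of the queue rather than at the hot end, and that fraction is baked in at construction, mirroring the "can't change it at runtime" complaint:

```python
# Hypothetical midpoint-insertion LRU sketch (not HBase's implementation).
# New blocks enter at a fixed split point instead of the MRU end, so a
# scan can mostly only churn the bottom fraction of the queue. SPLIT is
# a made-up constant standing in for a config you can't change at runtime.
SPLIT = 0.25

class MidpointLru:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = []              # index 0 = LRU end, index -1 = MRU end

    def access(self, block):
        if block in self.queue:      # re-access: move to the hot (MRU) end
            self.queue.remove(block)
            self.queue.append(block)
        else:                        # first access: insert at the split point
            self.queue.insert(int(len(self.queue) * SPLIT), block)
            if len(self.queue) > self.capacity:
                self.queue.pop(0)    # evict from the LRU end

cache = MidpointLru(capacity=8)
for b in "abcdefgh":                 # warm the cache with 8 blocks
    cache.access(b)
for b in "abcdefgh":                 # touch everything again -> all "hot"
    cache.access(b)
for b in "XYZ":                      # scan traffic enters at the split point
    cache.access(b)
print(cache.queue)
```

With the split fixed at 0.25, the three scan blocks evict each other (and the two coldest hot blocks) near the LRU end while the bulk of the hot data survives; whether that split is right for a given workload depends entirely on a knob you can't turn while the server is running.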
> Large scans and BlockCache evictions problems
> ---------------------------------------------
>
>                 Key: HBASE-9840
>                 URL: https://issues.apache.org/jira/browse/HBASE-9840
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>
> I just ran into a scenario that baffled me at first, but after some reflection it makes sense. I ran a very large scan that filled up most of the block cache with my scan's data, and I ran that scan a few times.
> Then I ran a much smaller scan, and that scan will never get all of its blocks cached if it does not fit entirely into the remaining BlockCache, regardless of how often I run it!
> The reason is that the blocks of the first large scan were all promoted. Since the second scan does not fully fit into the cache, its blocks are evicted round-robin as I rerun the scan. Thus those blocks are never accessed more than once before they get evicted again.
> Since promoted blocks are never demoted, the large scan's blocks will never be evicted unless we have another scan/get that is small enough to promote its own blocks.
> Not sure what the proper solution is, but it seems only an LRU cache that can expire blocks over time would solve this.
> Granted, this is a pretty special case.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
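The scenario described above can be reproduced with a toy two-segment LRU in Python. This is a hypothetical simplification, not HBase's LruBlockCache: the real cache sizes blocks in bytes and evicts proportionally across single/multi/in-memory priority buckets, while here every block is size 1 and single-access blocks are always evicted first. The key behavior it shares is that a block is promoted on its second access and, once promoted, is never demoted:

```python
# Toy two-segment LRU sketch (hypothetical simplification of a
# promote-on-second-access block cache; not HBase's actual eviction logic).
from collections import OrderedDict

class SegmentedLru:
    def __init__(self, capacity):
        self.capacity = capacity
        self.single = OrderedDict()   # blocks accessed once
        self.multi = OrderedDict()    # promoted blocks, never demoted

    def access(self, block):
        hit = block in self.single or block in self.multi
        if block in self.single:      # second access while cached: promote
            del self.single[block]
            self.multi[block] = True
        elif block in self.multi:
            self.multi.move_to_end(block)
        else:                         # first access (or re-read after eviction)
            self.single[block] = True
        self.evict()
        return hit

    def evict(self):
        # Evict the oldest single-access blocks first; promoted blocks are
        # only touched once the single segment is empty.
        while len(self.single) + len(self.multi) > self.capacity:
            victim = self.single if self.single else self.multi
            victim.popitem(last=False)

cache = SegmentedLru(capacity=100)

# Large scan of 90 blocks, run twice: every block gets promoted.
for _ in range(2):
    for b in range(90):
        cache.access(("large", b))

# Smaller scan of 20 blocks: it doesn't fit in the remaining 10 slots,
# so each block is evicted before the scan wraps around to it again,
# and it is never cached at re-access time, no matter how often we rerun.
hits = 0
for _ in range(5):
    for b in range(20):
        hits += cache.access(("small", b))

print("promoted blocks:", len(cache.multi))   # 90, still pinned
print("small-scan hits:", hits)               # 0
```

The 90 promoted blocks never see another access, yet they can never be evicted, which is exactly the pathology the issue describes: only an aging/expiry mechanism (or demotion) would let the smaller scan ever become cacheable.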