Subject: [jira] [Commented] (HBASE-18002) Investigate why bucket cache filling up in file mode in an existing file is slower
From: "Anoop Sam John (JIRA)"
To: issues@hbase.apache.org
Date: Mon, 8 May 2017 06:21:04 +0000 (UTC)

    [ https://issues.apache.org/jira/browse/HBASE-18002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16000320#comment-16000320 ]

Anoop Sam John commented on HBASE-18002:
----------------------------------------

I mean this: on an SSD, reads and writes happen at page granularity, while updates (erases) happen at block granularity. See http://codecapsule.com/2014/02/12/coding-for-ssds-part-6-a-summary-what-every-programmer-should-know-about-solid-state-drives/. When the file already contains data, we incur extra overhead while writing new cache entries. And when our blocks are not aligned with the SSD's pages (which is mostly the case), we end up having to rewrite even more pages/blocks just to add a single HFile block entry!
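To make the page math concrete, here is a minimal sketch (plain Java, not HBase code; the 4 KB page size and 64 KB HFile block size are assumptions for illustration):

{code:java}
// Counts how many SSD pages a write touches. A write that straddles a page
// boundary forces the device to read-modify-write one extra page.
public class SsdPageMath {
  static final int PAGE_SIZE = 4096; // assumed SSD page size

  /** Number of SSD pages a write of len bytes at offset touches. */
  static long pagesTouched(long offset, long len) {
    long firstPage = offset / PAGE_SIZE;
    long lastPage = (offset + len - 1) / PAGE_SIZE;
    return lastPage - firstPage + 1;
  }

  public static void main(String[] args) {
    long blockSize = 64 * 1024; // a common HFile block size
    System.out.println(pagesTouched(0, blockSize));   // aligned: 16 pages
    System.out.println(pagesTouched(100, blockSize)); // misaligned: 17 pages
  }
}
{code}

So a cache file whose entries are not page aligned pays for an extra page per entry, on top of any block-level erase the SSD has to do where the pages already hold data.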
bq. bigger concern is that if there are a lot of evictions and new blocks keep getting filled up we may still end up with the same problem right?

Hmm, true.. Need tests! (See the rough sketch after the issue details below.)

> Investigate why bucket cache filling up in file mode in an existing file is slower
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-18002
>                 URL: https://issues.apache.org/jira/browse/HBASE-18002
>             Project: HBase
>          Issue Type: Sub-task
>          Components: BucketCache
>    Affects Versions: 2.0.0
>            Reporter: ramkrishna.s.vasudevan
>             Fix For: 2.0.0
>
>
> This issue was observed when we recently did some tests with an SSD-based bucket cache. A similar thing was also reported by @stack and [~danielpol] while doing some of this bucket cache related testing.
> When we preload a bucket cache (in file mode) with a new file, the bucket cache fills up quite fast and there are not many 'failedBlockAdditions'. But when the same bucket cache is filled using a preexisting file (one that already had some entries in it), there are more 'failedBlockAdditions' and the cache does not fill up as fast. Investigate why this happens.
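For those tests, something rough like this could be a starting point (a hypothetical plain java.nio micro-benchmark, not the actual HBase test; the file name, block size, and counts are assumptions). The second pass overwrites a file that already holds data, mimicking the preexisting cache file case:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.WRITE;

public class CacheFileWriteBench {
  static final int BLOCK = 64 * 1024; // assumed HFile block size
  static final int BLOCKS = 4096;     // 256 MB total

  // Writes BLOCKS blocks with positional writes, roughly how a file-mode
  // bucket cache lays down entries, and returns the elapsed time.
  static long timeWritesMs(Path file) throws IOException {
    ByteBuffer buf = ByteBuffer.allocateDirect(BLOCK);
    long start = System.nanoTime();
    try (FileChannel ch = FileChannel.open(file, CREATE, WRITE)) {
      for (int i = 0; i < BLOCKS; i++) {
        buf.clear();
        ch.write(buf, (long) i * BLOCK); // positional write at block offset
      }
      ch.force(true); // flush so we time device writes, not the page cache
    }
    return (System.nanoTime() - start) / 1_000_000;
  }

  public static void main(String[] args) throws IOException {
    Path file = Paths.get("bucket.cache.bench"); // hypothetical cache file
    Files.deleteIfExists(file);
    System.out.println("fresh file ms:  " + timeWritesMs(file));  // first pass: new file
    System.out.println("reused file ms: " + timeWritesMs(file));  // second pass: preexisting data
  }
}
{code}

If the reused pass comes out consistently slower, that would point at the device-level rewrite cost discussed above rather than at anything in the BucketCache code path.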