Date: Sat, 23 Apr 2016 07:54:12 +0000 (UTC)
From: "Dylan Hutchison (JIRA)"
Reply-To: jira@apache.org
To: notifications@accumulo.apache.org
Subject: [jira] [Commented] (ACCUMULO-4229) BatchWriter writes to old, closed tablets leading to degraded write rates

[ https://issues.apache.org/jira/browse/ACCUMULO-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255193#comment-15255193 ]

Dylan Hutchison commented on ACCUMULO-4229:
-------------------------------------------

In my case, the tserver is also the client. I do evil things like open BatchWriters inside iterators.
So if the cache is indeed the problem, then it would only affect clients that live within the same JVM as the tserver. This is good news because not every client would see this problem. I'm not sure if this covers William's MapReduce case.

> BatchWriter writes to old, closed tablets leading to degraded write rates
> -------------------------------------------------------------------------
>
>                 Key: ACCUMULO-4229
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-4229
>             Project: Accumulo
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 1.7.1
>            Reporter: Dylan Hutchison
>
> BatchWriters that run for a long time sometimes see their write rates mysteriously decrease after the table they are writing to goes through a major compaction or a split. The decrease can be as bad as reducing throughput to 0.
>
> This was first mentioned in this [email thread|https://mail-archives.apache.org/mod_mbox/accumulo-user/201406.mbox/%3CCAMz+DuvmmHegOn9EJeHR9H_rRpP50L2QZ53BbdruVO0pirArQw@mail.gmail.com%3E] for major compactions.
>
> I discovered it in this [email thread|https://mail-archives.apache.org/mod_mbox/accumulo-dev/201604.mbox/%3CCAPx%3DJkaY7fVh-U0O%2Bysx2d98LOGMcA4oEQOYgoPxR-0em4hdvg%40mail.gmail.com%3E] for splits. See the thread for some log messages.
>
> I turned on TRACE logs and I think I pinned it down: the TabletLocator cached by a BatchWriter gets out of sync with the static cache of TabletLocators.
> # The TabletServerBatchWriter caches a TabletLocator from the static collection of TabletLocators when it starts writing. Suppose it is writing to tablet T1.
> # The TabletServerBatchWriter uses its locally cached TabletLocator inside its `binMutations` method for its entire lifespan; this cache is never refreshed or updated to sync up with the static collection of TabletLocators.
> # Every hour, the static collection of TabletLocators clears itself. The next call to get a TabletLocator from the static collection allocates a new TabletLocator.
Unfortunately, the TabletServerBatchWriter does not reflect this change and continues to use the old, locally cached TabletLocator.
> # Tablet T1 splits into T2 and T3, which closes T1. Since T1 no longer exists, all entries sent to it fail at the tablet server because T1 is closed.
> # The TabletServerBatchWriter receives the response from the tablet server that all entries failed to write. It invalidates the cache of the *new* TabletLocator obtained from the static collection of TabletLocators. The old, locally cached TabletLocator does not get invalidated.
> # The TabletServerBatchWriter re-queues the failed entries and tries to write them to the same closed tablet T1, because it is still looking up tablets using the old TabletLocator.
>
> This behavior subsumes the circumstances William wrote about in the thread he mentioned. The problem can occur as a result of either splits or major compactions. It would only stop the BatchWriter entirely if its whole memory filled up with writes to the one tablet that was closed by a majc or split; otherwise it just slows down the BatchWriter by failing to write some number of entries with every RPC.
>
> There are a few solutions we can think of.
> # Do not have the MutationWriter inside the TabletServerBatchWriter locally cache TabletLocators. I suspect the local cache was added for performance reasons, so this is probably not a good solution.
> # Have all the MutationWriters clear their caches at the same time the static TabletLocator cache clears. I like this one. We could store a reference to the Map that each MutationWriter holds inside a static synchronized WeakHashMap. The weak map only needs to be accessed:
> ## when a MutationWriter is constructed (from constructing a TabletServerBatchWriter), to add its new local TabletLocator cache to the weak map;
> ## when the static TabletLocator cache is cleared, to also clear every map in the weak map.
> # Another solution is to make the invalidate calls on the local TabletLocator cache rather than on the global static one. If we go this route, we should double-check that it does not affect the correctness of any other code that uses the cache. I like the previous idea better.
>
> The TimeoutTabletLocator does not help when no timeout is set on the BatchWriter (the default behavior).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
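The WeakHashMap idea (solution 2 above) can be sketched in a simplified, self-contained form. Note this is an illustration of the clear-together mechanic only, under assumed names: `LocatorCaches`, `register`, and `clearAll` are hypothetical stand-ins, not real Accumulo APIs, and plain `Map<String, Object>` stands in for a per-MutationWriter TabletLocator cache.

```java
import java.util.*;

// Hypothetical sketch: each writer's local locator cache is registered in a
// static synchronized WeakHashMap, so clearing the static cache also clears
// every registered local cache. Weak keys let a writer's cache be dropped
// automatically once the writer itself is garbage-collected.
class LocatorCaches {
    // Stands in for the static TabletLocator collection.
    private static final Map<String, Object> staticCache = new HashMap<>();

    // Registry of per-writer local caches; keys are held weakly.
    private static final Map<Map<String, Object>, Boolean> localCaches =
            Collections.synchronizedMap(new WeakHashMap<>());

    // Called when a MutationWriter is constructed: register its local cache.
    static void register(Map<String, Object> localCache) {
        localCaches.put(localCache, Boolean.TRUE);
    }

    // Called on the hourly clear: wipe the static cache AND every registered
    // local cache, so no writer keeps using a locator that the static
    // collection has already discarded.
    static void clearAll() {
        staticCache.clear();
        synchronized (localCaches) {
            for (Map<String, Object> cache : localCaches.keySet()) {
                cache.clear();
            }
        }
    }

    static Map<String, Object> staticCache() { return staticCache; }
}

public class WeakCacheSketch {
    public static void main(String[] args) {
        // A writer builds up its local locator cache...
        Map<String, Object> writerCache = new HashMap<>();
        LocatorCaches.register(writerCache);
        writerCache.put("T1", new Object());
        LocatorCaches.staticCache().put("T1", new Object());

        // ...then the hourly clear fires, and the local cache is emptied too,
        // so the next lookup goes back through the (fresh) static collection.
        LocatorCaches.clearAll();
        System.out.println(writerCache.isEmpty()); // true
    }
}
```

Iterating the WeakHashMap inside a `synchronized (localCaches)` block matters: `Collections.synchronizedMap` only guards individual calls, so the manual lock is required for the traversal in `clearAll`.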