From: Anton Vinogradov
Date: Mon, 26 Mar 2018 20:04:26 +0300
Subject: Re: Rebalancing - how to make it faster
To: dev@ignite.apache.org

>> It is impossible to disable WAL only for certain partitions without
>> completely overhauling the design of the Ignite storage mechanism. Right
>> now we can afford only to change the WAL mode per cache group.

Rebalancing of a cache group is essentially rebalancing of a single cache,
and that cache ("cache group") can be presented as a set of virtual caches.
So there are no issues for initial rebalancing. Let's disable WAL on
initial rebalancing.
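For illustration, a minimal sketch of how WAL can already be toggled per
cache from user code, assuming the per-cache WAL switch that IgniteCluster
exposes in recent 2.x releases. The cache name, the config file path and the
preload() phase are placeholders; IGNITE-8017 is about doing this
automatically during preloading.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class WalToggleSketch {
    public static void main(String[] args) {
        // Assumes a running persistent cluster and a cache named "myCache"
        // (both are placeholders for this sketch).
        try (Ignite ignite = Ignition.start("ignite-config.xml")) {
            // Switch WAL off for the cache (this affects its whole cache
            // group) before heavy preloading.
            boolean disabled = ignite.cluster().disableWal("myCache");

            try {
                preload(ignite); // hypothetical initial data load / rebalance
            }
            finally {
                // Switch WAL back on once the node holds consistent data;
                // Ignite checkpoints the loaded pages so they become durable.
                if (disabled)
                    ignite.cluster().enableWal("myCache");
            }
        }
    }

    private static void preload(Ignite ignite) {
        // Placeholder for the actual preloading work.
    }
}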
2018-03-26 16:46 GMT+03:00 Ilya Lantukh:

> Dmitry,
> It is impossible to disable WAL only for certain partitions without
> completely overhauling the design of the Ignite storage mechanism. Right
> now we can afford only to change the WAL mode per cache group.
>
> The idea is to disable WAL when a node doesn't have any partition in
> OWNING state, which means it doesn't have any consistent data and won't
> be able to restore from WAL anyway. I don't see any potential use for WAL
> on such a node, but we can keep a configurable parameter indicating
> whether we can automatically disable WAL in such a case or not.
>
> On Fri, Mar 23, 2018 at 10:40 PM, Dmitry Pavlov wrote:
>
> > Denis, as I understood, there is an idea to exclude only the rebalanced
> > partition(s) data. All other data will go to the WAL.
> >
> > Ilya, please correct me if I'm wrong.
> >
> > Fri, 23 Mar 2018 at 22:15, Denis Magda:
> >
> > > Ilya,
> > >
> > > That's a decent boost (5-20%) even having WAL enabled. Not sure that
> > > we should stake on the WAL "off" mode here, because if the whole
> > > cluster goes down, then the data consistency is questionable. As an
> > > architect, I wouldn't disable WAL for the sake of rebalancing; it's
> > > too risky.
> > >
> > > If you agree, then let's create the IEP. This way it will be easier
> > > to track this endeavor. BTW, are you ready to release any
> > > optimizations in 2.5, which is being discussed in a separate thread?
> > >
> > > --
> > > Denis
> > >
> > > On Fri, Mar 23, 2018 at 6:37 AM, Ilya Lantukh wrote:
> > >
> > > > Denis,
> > > >
> > > > > - Don't you want to aggregate the tickets under an IEP?
> > > > Yes, I think so.
> > > >
> > > > > - Does it mean we're going to update our B+Tree implementation?
> > > > > Any ideas how risky it is?
> > > > One of the tickets that I created (
> > > > https://issues.apache.org/jira/browse/IGNITE-7935) involves B+Tree
> > > > modification, but I am not planning to do it in the near future. It
> > > > shouldn't affect existing tree operations, only introduce new ones
> > > > (putAll, invokeAll, removeAll).
> > > >
> > > > > - Any chance you had a prototype that shows performance
> > > > > optimizations of the approach you are suggesting to take?
> > > > I have a prototype for the simplest improvements (
> > > > https://issues.apache.org/jira/browse/IGNITE-8019 &
> > > > https://issues.apache.org/jira/browse/IGNITE-8018) - together they
> > > > increase throughput by 5-20%, depending on configuration and
> > > > environment. Also, I've tested different WAL modes - switching from
> > > > LOG_ONLY to NONE gives a boost of over 100% - this is what I expect
> > > > from https://issues.apache.org/jira/browse/IGNITE-8017.
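For reference, the modes compared above (LOG_ONLY vs. NONE) are selected
node-wide on DataStorageConfiguration; a minimal configuration sketch, with
placeholder values:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalModeSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Native persistence must be enabled for the WAL to matter at all.
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // LOG_ONLY: records go to the OS page cache on write and are fsynced
        // at checkpoint time. NONE: no WAL at all - data survives only up to
        // the last checkpoint, so a crash can leave the store inconsistent.
        storageCfg.setWalMode(WALMode.LOG_ONLY);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        Ignition.start(cfg);
    }
}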
> > > > On Thu, Mar 22, 2018 at 9:48 PM, Denis Magda wrote:
> > > >
> > > > > Ilya,
> > > > >
> > > > > That's outstanding research and summary. Thanks for spending your
> > > > > time on this.
> > > > >
> > > > > Not sure I have enough expertise to challenge your approach, but
> > > > > it sounds 100% reasonable to me. As side notes:
> > > > >
> > > > >    - Don't you want to aggregate the tickets under an IEP?
> > > > >    - Does it mean we're going to update our B+Tree implementation?
> > > > >      Any ideas how risky it is?
> > > > >    - Any chance you had a prototype that shows performance
> > > > >      optimizations of the approach you are suggesting to take?
> > > > >
> > > > > --
> > > > > Denis
> > > > >
> > > > > On Thu, Mar 22, 2018 at 8:38 AM, Ilya Lantukh
> > > > > <ilantukh@gridgain.com> wrote:
> > > > >
> > > > > > Igniters,
> > > > > >
> > > > > > I've spent some time analyzing the performance of the
> > > > > > rebalancing process. The initial goal was to understand what
> > > > > > limits its throughput, because it is significantly slower than
> > > > > > the network and the storage device can theoretically handle.
> > > > > >
> > > > > > It turns out our current implementation has a number of issues
> > > > > > caused by a single fundamental problem.
> > > > > >
> > > > > > During rebalancing, data is sent in batches called
> > > > > > GridDhtPartitionSupplyMessages. The batch size is configurable;
> > > > > > the default value is 512KB, which could mean thousands of
> > > > > > key-value pairs. However, we don't take any advantage of this
> > > > > > fact and process each entry independently:
> > > > > > - checkpointReadLock is acquired multiple times for every entry,
> > > > > > leading to unnecessary contention - this is clearly a bug;
> > > > > > - for each entry we write (and fsync, if the configuration
> > > > > > requires it) a separate WAL record - so, if a batch contains N
> > > > > > entries, we might end up doing N fsyncs;
> > > > > > - adding every entry into CacheDataStore also happens completely
> > > > > > independently. It means we will traverse and modify each index
> > > > > > tree N times, we will allocate space in FreeList N times, and we
> > > > > > will have to additionally store O(N*log(N)) page delta records
> > > > > > in the WAL.
> > > > > >
> > > > > > I've created a few tickets in JIRA with very different levels of
> > > > > > scale and complexity.
> > > > > >
> > > > > > Ways to reduce the impact of independent processing:
> > > > > > - https://issues.apache.org/jira/browse/IGNITE-8019 - the
> > > > > > aforementioned bug causing contention on checkpointReadLock;
> > > > > > - https://issues.apache.org/jira/browse/IGNITE-8018 - an
> > > > > > inefficiency in the GridCacheMapEntry implementation;
> > > > > > - https://issues.apache.org/jira/browse/IGNITE-8017 -
> > > > > > automatically disable WAL during preloading.
> > > > > >
> > > > > > Ways to solve the problem on a more global level:
> > > > > > - https://issues.apache.org/jira/browse/IGNITE-7935 - a ticket
> > > > > > to introduce batch modification;
> > > > > > - https://issues.apache.org/jira/browse/IGNITE-8020 - a complete
> > > > > > redesign of the rebalancing process for persistent caches, based
> > > > > > on file transfer.
> > > > > >
> > > > > > Everyone is welcome to criticize the above ideas, suggest new
> > > > > > ones, or participate in the implementation.
> > > > > >
> > > > > > --
> > > > > > Best regards,
> > > > > > Ilya
> > > >
> > > > --
> > > > Best regards,
> > > > Ilya
>
> --
> Best regards,
> Ilya
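To make the per-entry overhead described in Ilya's original message above
(and the batched alternative that IGNITE-7935 / IGNITE-8019 aim for) easier
to picture, here is an illustrative sketch. All types and method names in it
are hypothetical, not actual Ignite internals:

// Illustrative sketch only -- these types and methods are hypothetical,
// not the real Ignite storage API discussed above.
import java.util.List;

class RebalanceBatchSketch {
    interface Entry { /* key-value pair from a supply message */ }

    interface Store {
        void checkpointReadLock();
        void checkpointReadUnlock();
        void walLog(Entry e);               // one WAL record per entry
        void walLog(List<Entry> batch);     // one WAL record per batch
        void treePut(Entry e);              // one index-tree traversal per call
        void treePutAll(List<Entry> batch); // single traversal, bulk insert
    }

    // Current approach: every entry pays for locking, WAL and tree traversal.
    static void applyPerEntry(Store store, List<Entry> batch) {
        for (Entry e : batch) {
            store.checkpointReadLock();
            try {
                store.walLog(e);  // up to N fsyncs for N entries
                store.treePut(e); // N tree traversals, N FreeList allocations
            }
            finally {
                store.checkpointReadUnlock();
            }
        }
    }

    // Batched approach: pay the fixed costs once per supply message.
    static void applyBatched(Store store, List<Entry> batch) {
        store.checkpointReadLock();
        try {
            store.walLog(batch);     // single WAL record / fsync for the batch
            store.treePutAll(batch); // single traversal, fewer page delta records
        }
        finally {
            store.checkpointReadUnlock();
        }
    }
}

The point is only that the lock, the WAL record/fsync and the tree traversal
are paid once per supply message instead of once per entry.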