Date: Thu, 8 Jun 2017 23:35:18 +0000 (UTC)
From: "Andrew Purtell (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-18027) Replication should respect RPC size limits when batching edits

    [ https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16043637#comment-16043637 ]

Andrew Purtell commented on HBASE-18027:
----------------------------------------

[~ashu210890]. No good reason. Done.
> Replication should respect RPC size limits when batching edits
> ---------------------------------------------------------------
>
>                 Key: HBASE-18027
>                 URL: https://issues.apache.org/jira/browse/HBASE-18027
>             Project: HBase
>          Issue Type: Bug
>          Components: Replication
>    Affects Versions: 2.0.0, 1.4.0, 1.3.1
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>             Fix For: 2.0.0, 1.4.0, 1.3.2
>
>         Attachments: HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in batches. We create N lists, where N is the minimum of the configured number of replicator threads, the number of 100-WALEdit batches, and the number of current sinks. Every pending entry in the replication context is then placed, in order, by hash of its encoded region name into one of these N lists. Each of the N lists is then sent all at once in one replication RPC. We do not check whether the total size of the data in each list will exceed the RPC size limit; the code presumes each individual edit is reasonably small. Not checking the aggregate size while assembling the lists into RPCs is an oversight and can lead to replication failure when that assumption is violated.
> We can fix this by generating as many replication RPC calls as needed to drain a list, keeping each RPC under the limit, instead of assuming the whole list will fit in one.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
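For illustration, the fix described in the last paragraph of the issue boils down to size-aware sub-batching: instead of shipping a whole per-region list in one RPC, drain it into sub-batches whose aggregate estimated size stays under the RPC request size cap, and send one RPC per sub-batch. The sketch below shows only that batching step and is not the actual HBASE-18027 patch; the Edit interface, estimatedSerializedSize(), and rpcSizeLimit are illustrative placeholders for whatever entry type, size estimate, and configured cap the real code uses.

{code:java}
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch: split one ordered list of edits into sub-batches whose
 * aggregate estimated size stays under an RPC size limit, so each sub-batch
 * can be shipped in its own replication RPC. Names here (Edit,
 * estimatedSerializedSize, rpcSizeLimit) are illustrative placeholders,
 * not the HBase API.
 */
public final class BatchSplitter {

  /** Placeholder for a replicated WAL edit; only its estimated size matters here. */
  public interface Edit {
    long estimatedSerializedSize();
  }

  public static List<List<Edit>> splitByRpcSize(List<Edit> edits, long rpcSizeLimit) {
    List<List<Edit>> batches = new ArrayList<>();
    List<Edit> current = new ArrayList<>();
    long currentSize = 0;
    for (Edit edit : edits) {
      long size = edit.estimatedSerializedSize();
      // Close the current batch if adding this edit would push it over the limit.
      // A single oversized edit still goes out alone rather than being dropped.
      if (!current.isEmpty() && currentSize + size > rpcSizeLimit) {
        batches.add(current);
        current = new ArrayList<>();
        currentSize = 0;
      }
      current.add(edit);
      currentSize += size;
    }
    if (!current.isEmpty()) {
      batches.add(current);
    }
    return batches;
  }
}
{code}

Each returned sub-batch would then go out as its own replicate call, which keeps every RPC under the limit while preserving the original per-region ordering of the list.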