Delivered-To: mailing list reviews@impala.incubator.apache.org
Mailing-List: contact reviews-help@impala.incubator.apache.org; run by ezmlm
Message-Id: <201705250255.v4P2tbMi004843@ip-10-146-233-104.ec2.internal>
Date: Thu, 25 May 2017 02:55:37 +0000
From: "Impala Public Jenkins (Code Review)"
To: Tim Armstrong, impala-cr@cloudera.com, reviews@impala.incubator.apache.org
Subject: [Impala-ASF-CR] IMPALA-4923: reduce memory transfer for selective scans
X-Gerrit-MessageType: merged
X-Gerrit-Change-Id: I3773dc63c498e295a2c1386a15c5e69205e747ea
X-Gerrit-Commit: b4343895d8906d66e5777edab080aacc0eeb3717
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Impala Public Jenkins has submitted this change and it was merged.

Change subject: IMPALA-4923: reduce memory transfer for selective scans
......................................................................

IMPALA-4923: reduce memory transfer for selective scans

Most of the code changes restructure things so that the scratch batch's
tuple buffer is stored in a separate MemPool from auxiliary memory such as
decompression buffers. This restructuring does not by itself change the
scanner's behaviour, but it allows us to recycle the tuple buffer without
holding onto unused auxiliary memory.

The optimisation is implemented in TryCompact(): if enough rows were
filtered out during the copy from the scratch batch to the output batch,
the fixed-length portions of the surviving rows (if any) are copied to a
new, smaller buffer, and the original, larger buffer is reused for the
next scratch batch. Previously the large buffer was always attached to the
output batch, so a large buffer was transferred between threads for every
scratch batch processed.
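To illustrate the compaction idea described above, here is a minimal,
hypothetical C++ sketch. The struct, field names, and the 50% survival
threshold are all illustrative assumptions, not Impala's actual
RowBatch/MemPool API: if few enough rows survived filtering, the surviving
fixed-length tuples are copied into a new right-sized buffer, freeing the
large scratch buffer for reuse on the next batch.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical sketch of the TryCompact() pattern (names are illustrative).
struct ScratchBatch {
  std::vector<uint8_t> tuple_buffer;  // large buffer holding all scanned tuples
  size_t tuple_size = 0;              // fixed-length tuple size in bytes
  size_t capacity_rows = 0;           // rows the buffer was sized for
  std::vector<size_t> surviving_idx;  // indices of rows that passed the filters

  // If the scan was selective enough, copy the survivors into a compact
  // buffer and return it; the caller attaches the compact buffer to the
  // output batch and reuses tuple_buffer for the next scratch batch.
  // Returns an empty vector when compaction is not worthwhile, meaning
  // the caller should attach tuple_buffer itself (the old behaviour).
  std::vector<uint8_t> TryCompact() {
    // Only compact when more than half the rows were filtered out
    // (illustrative threshold).
    if (surviving_idx.size() * 2 > capacity_rows) return {};
    std::vector<uint8_t> compact(surviving_idx.size() * tuple_size);
    uint8_t* dst = compact.data();
    for (size_t idx : surviving_idx) {
      std::memcpy(dst, tuple_buffer.data() + idx * tuple_size, tuple_size);
      dst += tuple_size;
    }
    surviving_idx.clear();  // tuple_buffer is now free for reuse
    return compact;
  }
};
```

The key trade-off the commit message describes: the cost of the extra
memcpy is paid only when few rows survive, and in exchange the large
buffer no longer crosses thread boundaries with every scratch batch.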
In combination with the decompression buffer change in IMPALA-5304, this
means that in many cases selective scans don't produce nearly as many
empty or near-empty batches and do not attach nearly as much memory to
each batch.

Performance: even on an 8-core machine I see some speedup on selective
scans. Profiling with "perf top" also showed that time in TCMalloc was
reduced - it went from several % of CPU time to a minimal amount. Running
TPC-H on the same machine showed a ~5% overall improvement and no
regressions; e.g. Q6 got 20-25% faster. I hope to do some additional
cluster benchmarking on systems with more cores to verify that the severe
performance problems there are fixed, but in the meantime it seems like we
have enough evidence that it will at least improve things.

Testing: added a couple of selective scans that exercise the new code
paths.

Change-Id: I3773dc63c498e295a2c1386a15c5e69205e747ea
Reviewed-on: http://gerrit.cloudera.org:8080/6949
Reviewed-by: Tim Armstrong
Tested-by: Impala Public Jenkins
---
M be/src/exec/hdfs-parquet-scanner.cc
M be/src/exec/parquet-column-readers.cc
M be/src/exec/parquet-scratch-tuple-batch.h
M be/src/runtime/row-batch.cc
M be/src/runtime/row-batch.h
M testdata/workloads/functional-query/queries/QueryTest/parquet.test
6 files changed, 212 insertions(+), 60 deletions(-)

Approvals:
  Impala Public Jenkins: Verified
  Tim Armstrong: Looks good to me, approved

--
To view, visit http://gerrit.cloudera.org:8080/6949
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I3773dc63c498e295a2c1386a15c5e69205e747ea
Gerrit-PatchSet: 9
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-Owner: Tim Armstrong
Gerrit-Reviewer: Alex Behm
Gerrit-Reviewer: Dan Hecht
Gerrit-Reviewer: Impala Public Jenkins
Gerrit-Reviewer: Michael Ho
Gerrit-Reviewer: Mostafa Mokhtar
Gerrit-Reviewer: Tim Armstrong