Subject: Re: ExternalSort doesn't properly account for sliced buffers
From: Abdel Hakim Deneche <adeneche@maprtech.com>
To: dev@drill.apache.org
Date: Fri, 20 Nov 2015 16:26:12 -0800

"dev@drill.apache.org" Content-Type: multipart/alternative; boundary=001a11444c12180aab0525020a04 --001a11444c12180aab0525020a04 Content-Type: text/plain; charset=UTF-8 I'm confused here. Looking at DrillBuf.capacity() it already returns the length of the slice: @Override > public int capacity() { > return length; > } On Fri, Nov 20, 2015 at 4:11 PM, Hanifi GUNES wrote: > Seems like I am missing some input here. Are we talking about changing the > behavior of DrillBuf#capacity()? If so, I would +1 the following: > > *- It seems like capacity should return the length of the slice (since > I believe that fits with the general ByteBuf interface). If I > reset writerIndex(0), I should be able write to capacity() without issue.* > > 2015-11-20 16:03 GMT-08:00 Jacques Nadeau : > > > My gut is that any changes other than correct accounting will have little > > impact on real world situations. Do you think the change will make enough > > difference to be valuable? > > > > It seems like capacity should return the length of the slice (since I > > believe that fits with the general ByteBuf interface). If I reset > > writerIndex(0), I should be able write to capacity() without issue. Seems > > weird to return some other (mostly disconnect) value. > > > > > > > > -- > > Jacques Nadeau > > CTO and Co-Founder, Dremio > > > > On Fri, Nov 20, 2015 at 3:28 PM, Abdel Hakim Deneche < > > adeneche@maprtech.com> > > wrote: > > > > > @Steven, I think DrillBuf.capacity() was changed at some point, I was > > > looking at the code and it seems to only return the size of the "slice" > > and > > > not the root buffer. > > > > > > While waiting for the new allocator and transfer of ownership, would it > > > make sense to remove the check for root buffers like this ? > > > > > > private long getBufferSize(VectorAccessible batch) { > > > > long size = 0; > > > > for (VectorWrapper w : batch) { > > > > DrillBuf[] bufs = w.getValueVector().getBuffers(false); > > > > for (DrillBuf buf : bufs) { > > > > size += buf.capacity(); > > > > } > > > > } > > > > return size; > > > > } > > > > > > > > > > > > On Fri, Nov 20, 2015 at 2:50 PM, Hanifi GUNES > > > wrote: > > > > > > > Problem with the above code is that not all vectors operate on root > > > buffers > > > > rendering the accounting above inaccurate. In fact your example is > one > > > > perfect instance where vectors would point to non-root buffers for > sure > > > > because of the slicing taking place at RecordBatchLoader#load [1] > > > > > > > > > > > > 1: > > > > > > > > > > > > > > https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatchLoader.java#L117 > > > > > > > > 2015-11-20 11:41 GMT-08:00 Steven Phillips : > > > > > > > > > I think it is because we can't actually properly account for sliced > > > > > buffers. I don't remember for sure, but I think it might be because > > > > calling > > > > > buf.capacity() on a sliced buffer returns the the capacity of root > > > > buffer, > > > > > not the size of the slice. That may not be correct, but I think it > > was > > > > > something like that. Whatever it is, I am pretty sure it was giving > > > wrong > > > > > results when they are sliced buffers. > > > > > > > > > > I think we need to get the new allocator, along with proper > transfer > > of > > > > > ownership in order to do this correctly. Then we can just query the > > > > > allocator rather than trying to track it separately. 
> > > > > On Fri, Nov 20, 2015 at 11:25 AM, Abdel Hakim Deneche <adeneche@maprtech.com> wrote:
> > > > >
> > > > > > I'm looking at the external sort code, and it uses the following
> > > > > > method to compute the allocated size of a batch:
> > > > > >
> > > > > > private long getBufferSize(VectorAccessible batch) {
> > > > > >   long size = 0;
> > > > > >   for (VectorWrapper<?> w : batch) {
> > > > > >     DrillBuf[] bufs = w.getValueVector().getBuffers(false);
> > > > > >     for (DrillBuf buf : bufs) {
> > > > > >       if (buf.isRootBuffer()) {
> > > > > >         size += buf.capacity();
> > > > > >       }
> > > > > >     }
> > > > > >   }
> > > > > >   return size;
> > > > > > }
> > > > > >
> > > > > > This method only accounts for root buffers, but when we have a
> > > > > > receiver below the sort, most (if not all) buffers are child buffers.
> > > > > > This may delay spilling and increase the memory usage of the drillbit.
> > > > > > If my computations are correct, for a single query, one drillbit can
> > > > > > allocate up to 40GB without spilling once to disk.
> > > > > >
> > > > > > Is there a specific reason we only account for root buffers?
> > > > > >
> > > > > > --
> > > > > > Abdelhakim Deneche
> > > > > > Software Engineer
> > > > > > Now Available - Free Hadoop On-Demand Training
> > > > > > <http://www.mapr.com/training?utm_source=Email&utm_medium=Signature&utm_campaign=Free%20available>
> > >
> > > --
> > > Abdelhakim Deneche
> > > Software Engineer
> > > Now Available - Free Hadoop On-Demand Training
> > > <http://www.mapr.com/training?utm_source=Email&utm_medium=Signature&utm_campaign=Free%20available>

--
Abdelhakim Deneche
Software Engineer
Now Available - Free Hadoop On-Demand Training
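
For reference, the slice behaviour discussed above matches the general Netty ByteBuf contract that DrillBuf builds on. The following standalone sketch uses plain Netty buffers (not Drill code) to show that capacity() on a slice reports the slice length, not the root buffer's size, and that after resetting writerIndex to 0 the slice is writable up to capacity():

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class SliceCapacityCheck {
  public static void main(String[] args) {
    // Root buffer of 1024 bytes.
    ByteBuf root = Unpooled.buffer(1024);

    // A 256-byte slice over the root buffer.
    ByteBuf slice = root.slice(0, 256);

    System.out.println(root.capacity());       // 1024
    System.out.println(slice.capacity());      // 256: the slice length, not the root's capacity

    // After resetting the writer index, the slice is writable up to capacity().
    slice.writerIndex(0);
    System.out.println(slice.writableBytes()); // 256

    root.release();
  }
}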
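
And a hypothetical sketch of the accounting problem itself, again with plain Netty buffers and with unwrap() == null standing in for DrillBuf.isRootBuffer(): when a batch's vectors hold only slices of a shared root buffer (as below a receiver), counting root buffers alone reports zero, while summing every buffer's capacity() tracks what was actually allocated:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.util.Arrays;
import java.util.List;

public class SliceAccountingSketch {
  public static void main(String[] args) {
    // What the allocator really holds for the incoming batch.
    ByteBuf root = Unpooled.buffer(4096);

    // What the loaded batch's vectors actually reference: slices of that root.
    List<ByteBuf> batchBuffers = Arrays.asList(
        root.slice(0, 1024),
        root.slice(1024, 1024),
        root.slice(2048, 2048));

    long rootOnly = 0;   // old accounting: count root buffers only
    long allBuffers = 0; // proposed accounting: count every buffer's capacity()
    for (ByteBuf buf : batchBuffers) {
      if (buf.unwrap() == null) {  // stand-in for isRootBuffer()
        rootOnly += buf.capacity();
      }
      allBuffers += buf.capacity();
    }

    System.out.println("root-only accounting:  " + rootOnly);    // 0 -> spilling never triggers
    System.out.println("per-buffer accounting: " + allBuffers);  // 4096
    System.out.println("actually allocated:    " + root.capacity());

    root.release();
  }
}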