From: Josh Elser <josh.elser@gmail.com>
Date: Mon, 24 Mar 2014 15:41:20 -0700
To: user@accumulo.apache.org
Subject: Re: Combiner behaviour

Russ,

Check out https://github.com/joshelser/accumulo-column-summing

Using the SummingCombiner with a call to ScannerBase#fetchColumn(Text,Text) will be a pretty decent solution for modest data sets. The (better articulated than previously) reason why the SummingCombiner is sub-par is that it only sums within a single row, not across rows. That is why a custom iterator that sums across rows is desirable.
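For reference, the combiner-based approach looks roughly like this. This is an untested sketch against the 1.5-era client API; the Connector, the table name "docs", and the iterator priority are placeholders:

import java.util.Collections;
import java.util.Map;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.IteratorSetting;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.LongCombiner;
import org.apache.accumulo.core.iterators.user.SummingCombiner;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.io.Text;

// Sums meta:size with a scan-time SummingCombiner. The combiner only
// collapses values within each row, so the client still adds up one
// number per row.
long sumWithCombiner(Connector connector) throws Exception {
  IteratorSetting setting = new IteratorSetting(50, "sum", SummingCombiner.class);
  SummingCombiner.setColumns(setting,
      Collections.singletonList(new IteratorSetting.Column("meta", "size")));
  // Values in this thread are ASCII longs ("1807"), hence STRING encoding.
  SummingCombiner.setEncodingType(setting, LongCombiner.Type.STRING);

  Scanner scanner = connector.createScanner("docs", Authorizations.EMPTY);
  scanner.addScanIterator(setting);
  scanner.fetchColumn(new Text("meta"), new Text("size"));

  long total = 0;
  for (Map.Entry<Key,Value> entry : scanner) {
    total += Long.parseLong(entry.getValue().toString());
  }
  return total;
}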
For some numbers, you can run the microbenchmark from the test class in the above repository. It creates a table with 1M rows, 7 columns per row, and sums over a single column.

We can lower the split threshold on the table to split it into more tablets, which should give more realistic performance (you pay the penalty for the RPC calls that you would see at "scale"). The reduction in the number of keys returned, and thus the amount of data sent over the wire, is the primary reason the custom-iterator approach is desirable. Hope this makes things clearer!

Number of splits for table: 65

  Run   Iterator (66 results)   Combiner (1000000 results)
  ---   ---------------------   --------------------------
   1          4482 ms                   4314 ms
   2          3651 ms                   3754 ms
   3          3685 ms                   3839 ms
   4          3643 ms                   4066 ms
   5          3880 ms                   4084 ms
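If you want to reproduce the multi-tablet setup, the split threshold is just a table property; something along these lines, with the table name again a placeholder:

// Lower the split threshold so the table breaks into many small tablets;
// the shell equivalent is: config -t docs -s table.split.threshold=100K
connector.tableOperations().setProperty("docs", "table.split.threshold", "100K");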
On 3/20/14, 9:49 PM, Josh Elser wrote:
> Russ,
>
> Close to it. I'll try to work up some actual code for what I'm suggesting.
>
> On 3/20/14, 1:12 AM, Russ Weeks wrote:
>> Hi, Josh,
>>
>> Thanks for walking me through this. This is my first stab at it:
>>
>> import java.io.IOException;
>>
>> import org.apache.accumulo.core.data.Key;
>> import org.apache.accumulo.core.data.Value;
>> import org.apache.accumulo.core.iterators.IteratorEnvironment;
>> import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
>> import org.apache.accumulo.core.iterators.WrappingIterator;
>>
>> public class RowSummingCombiner extends WrappingIterator {
>>   Key lastKey;
>>   long sum;
>>
>>   public Key getTopKey() {
>>     if (lastKey == null)
>>       return super.getTopKey();
>>     return lastKey;
>>   }
>>
>>   public Value getTopValue() {
>>     // Returning the value clears lastKey, so hasTop() goes false
>>     // once the source is exhausted.
>>     lastKey = null;
>>     return new Value(Long.toString(sum).getBytes());
>>   }
>>
>>   public boolean hasTop() {
>>     return lastKey != null || super.hasTop();
>>   }
>>
>>   public void next() throws IOException {
>>     // Drain the source, accumulating the sum and remembering the
>>     // last key seen.
>>     while (super.hasTop()) {
>>       lastKey = super.getTopKey();
>>       if (!lastKey.isDeleted()) {
>>         sum += Long.parseLong(super.getTopValue().toString());
>>       }
>>       super.next();
>>     }
>>   }
>>
>>   public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
>>     RowSummingCombiner instance = new RowSummingCombiner();
>>     instance.setSource(getSource().deepCopy(env));
>>     return instance;
>>   }
>> }
>>
>> I restrict the scanner to the single CF/CQ that I'm interested in
>> summing. The biggest disadvantage is that I can't reuse any of the
>> logic in the Combiner class hierarchy for value decoding etc., because
>> the logic to "combine" based on the common (row, cf, cq, vis) tuple is
>> baked in at the top level of that hierarchy and I don't see an easy way
>> to plug in new behaviour. But each instance of the RowSummingCombiner
>> returns its own sum, and then my client just has to add up a handful of
>> values. Is this what you were getting at?
>>
>> Regards,
>> -Russ
>>
>> On Wed, Mar 19, 2014 at 3:51 PM, Josh Elser <josh.elser@gmail.com> wrote:
>>
>>> Ummm, you got the gist of it (I may have misspoken in what I
>>> initially said).
>>>
>>> My first thought was to make an iterator that filters down to the
>>> columns you want. It doesn't look like an iterator that does this
>>> efficiently is included in the core (although I know I've done
>>> something similar in the past). This iterator would scan the rows of
>>> your table, returning just the columns you want:
>>>
>>> 000200001ccaac30 meta:size [] 1807
>>> 000200001cdaac30 meta:size [] 656
>>> 000200001cfaac30 meta:size [] 565
>>>
>>> Then we could put the summing combiner on top of that iterator to
>>> sum those and get back a single key. The row in the key you return
>>> should be the last row you included in the sum. This way, if a retry
>>> happens under the hood of the BatchScanner, you'll resume where you
>>> left off and won't double-count things.
>>>
>>> (You could even do things like sum a maximum of N rows before
>>> returning back some intermediate count, to better parallelize things.)
>>>
>>> 000200001cfaac30 meta:size [] 3028
>>>
>>> So each "ScanSession" (what the BatchScanner is doing underneath
>>> the hood) would return you a value, and your client would do a
>>> final summation.
>>>
>>> The final stack would be {(data from accumulo) -> SKVI to project
>>> columns -> summing combiner} -> final summation, where {...} denotes
>>> work done server-side. This is one of those things that really
>>> shines with the Accumulo API.
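>>> Untested, but the final summation client-side is just folding the
>>> partial sums together; something like this, assuming a BatchScanner
>>> named "bs" with the iterator stack above attached and string-encoded
>>> long values:
>>>
>>> // One partial sum comes back per ScanSession; add them up.
>>> long total = 0;
>>> for (java.util.Map.Entry<Key,Value> entry : bs) {
>>>   total += Long.parseLong(entry.getValue().toString());
>>> }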
>>> On 3/19/14, 6:40 PM, Russ Weeks wrote:
>>>
>>>> Hi, Josh,
>>>>
>>>> Thanks very much for your response. I think I get what you're saying,
>>>> but it's kind of blowing my mind.
>>>>
>>>> Are you saying that if I first set up an iterator that took my
>>>> key/value pairs like,
>>>>
>>>> 000200001ccaac30 meta:size [] 1807
>>>> 000200001ccaac30 meta:source [] data2
>>>> 000200001cdaac30 meta:filename [] doc02985453
>>>> 000200001cdaac30 meta:size [] 656
>>>> 000200001cdaac30 meta:source [] data2
>>>> 000200001cfaac30 meta:filename [] doc04484522
>>>> 000200001cfaac30 meta:size [] 565
>>>> 000200001cfaac30 meta:source [] data2
>>>> 000200001dcaac30 meta:filename [] doc03342958
>>>>
>>>> and emitted something like,
>>>>
>>>> 0 meta:size [] 1807
>>>> 0 meta:size [] 656
>>>> 0 meta:size [] 565
>>>>
>>>> and then applied a SummingCombiner at a lower priority than that
>>>> iterator, then... it should work, right?
>>>>
>>>> I'll give it a try.
>>>>
>>>> Regards,
>>>> -Russ
>>>>
>>>> On Wed, Mar 19, 2014 at 3:33 PM, Josh Elser <josh.elser@gmail.com> wrote:
>>>>
>>>>> Russ,
>>>>>
>>>>> Remember that data is distributed across the nodes in your cluster
>>>>> by tablet. A tablet, at the very minimum, will contain one row;
>>>>> another way to say the same thing is that a row will never be split
>>>>> across multiple tablets. The only guarantee you get from Accumulo
>>>>> here is that you can use a combiner to do your combination across
>>>>> one row.
>>>>>
>>>>> However, when you combine (pun not intended) another SKVI with the
>>>>> Combiner, you can do more merging of that intermediate "combined
>>>>> value" from each row before returning to the client. You can think
>>>>> of this approach as doing a multi-level summation.
>>>>>
>>>>> This still requires one final sum on the client side, but you should
>>>>> get quite a reduction with this approach over doing the entire sum
>>>>> client-side. You sum the meta:size column in parallel across parts
>>>>> of the table (server-side), and then client-side you sum the sums
>>>>> from each part.
>>>>>
>>>>> I can sketch this out in more detail if it's not clear. HTH
>>>>>
>>>>> On 3/19/14, 6:18 PM, Russ Weeks wrote:
>>>>>
>>>>>> The Accumulo manual states that combiners can be applied to values
>>>>>> which share the same rowID, column family, and column qualifier. Is
>>>>>> there any way to adjust this behaviour? I have rows that look like,
>>>>>>
>>>>>> 000200001ccaac30 meta:size [] 1807
>>>>>> 000200001ccaac30 meta:source [] data2
>>>>>> 000200001cdaac30 meta:filename [] doc02985453
>>>>>> 000200001cdaac30 meta:size [] 656
>>>>>> 000200001cdaac30 meta:source [] data2
>>>>>> 000200001cfaac30 meta:filename [] doc04484522
>>>>>> 000200001cfaac30 meta:size [] 565
>>>>>> 000200001cfaac30 meta:source [] data2
>>>>>> 000200001dcaac30 meta:filename [] doc03342958
>>>>>>
>>>>>> and I'd like to sum up all the values of meta:size across all rows.
>>>>>> I know I can scan the sizes and sum them on the client side, but I
>>>>>> was hoping there would be a way to do this inside my cluster. Is
>>>>>> MapReduce my only option here?
>>>>>>
>>>>>> Thanks,
>>>>>> -Russ