Subject: Re: ORC double encoding optimization proposal
From: Dain Sundstrom
Date: Mon, 26 Mar 2018 18:32:08 -0700
Cc: "user@orc.apache.org"
To: dev@orc.apache.org

For our installations, we sort the streams based on size before writing them. This places all the small streams next to each other so a single IO can grab all of them, and then the large streams are typically so large they need multiple IOs anyway. This really helps when you have (small) number columns mixed with (large) string columns. If you only want the numbers, you otherwise end up doing a lot of IOs (because of the large string columns in the middle), and with this model you have a higher chance of getting a shared IO.

-dain

> On Mar 26, 2018, at 4:23 PM, Owen O'Malley wrote:
>
> This is a really interesting conversation. Of course, the original use case
> for ORC was that you were never reading less than a stripe. So putting all
> of the data streams for a column back to back, which isn't in the spec, but
> should be, was optimal in terms of seeks.
>
> There are two cases that violate this assumption:
> * you are using predicate push down and thus only need to read a few row
>   groups.
> * you are extending the reader to interleave the compression and io.
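[Editor's note: Dain's size-based stream ordering can be sketched roughly as below. This is a hypothetical illustration, not the actual ORC writer code; the stream names and byte lengths are made up for the example.]

```python
# Sketch of ordering a column's streams by size before writing them out,
# so the small streams end up adjacent (one IO can grab all of them) and
# the large streams, which need multiple IOs anyway, come last.
# Stream names and lengths below are illustrative, not real ORC data.

def order_streams_by_size(streams):
    """Return the streams sorted ascending by length, smallest first."""
    return sorted(streams, key=lambda s: s["length"])

streams = [
    {"name": "DATA",    "length": 4_000_000},  # large string data
    {"name": "PRESENT", "length": 2_000},      # small null bitmap
    {"name": "LENGTH",  "length": 40_000},     # small integer lengths
]

layout = order_streams_by_size(streams)
print([s["name"] for s in layout])  # -> ['PRESENT', 'LENGTH', 'DATA']
```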
>
> So a couple of layouts come to mind:
>
> * Finish the compression chunks at the row group (10k rows) and interleave
>   the streams for the column for each row group.
>   This would help with both predicate pushdown and the async io reader.
>   We would lose some compression by closing the compression chunks early
>   and have additional overhead to track the sizes for the row group.
>   On the plus side, we could simplify the indexes because the compression
>   chunks would always align with row groups.
>
> * Divide each 256k (larger?) with the proportional part of each stream.
>   Thus if the column has 3 streams and they were 50%, 30%, and 20%, we would
>   take that much data from each 256k. This wouldn't reduce the compression or
>   require any additional metadata, since the reader could determine the
>   number of bytes of each stream per a "page". This wouldn't help very much
>   for PPD, but would help for the async io reader.
>
> So which use case matters the most? What other layouts would be interesting?
>
> .. Owen
>
> On Mon, Mar 26, 2018 at 12:33 PM, Gopal Vijayaraghavan wrote:
>
>>
>>> the bad thing is that we still have TWO encodings to discuss.
>>
>> Two is exactly what we need, not five - from the existing ORC configs
>>
>> hive.exec.orc.encoding.strategy=[SPEED, COMPRESSION];
>>
>> FLIP8 was my original suggestion to Teddy from the byteuniq UDF runs,
>> though the regressions in compression over the PlainV2 are still bothering
>> me (which is why I went digging into the Zlib dictionary builder impl with
>> infgen).
>>
>> All comparisons below are for Size & against PlainV2.
>>
>> For Zlib, this is pretty bad for FLIP.
>>
>> ZLIB:HIGGS         Regressing on FLIP by 6 points
>> ZLIB:DISCOUNT_AMT  Regressing on FLIP by 10 points
>> ZLIB:IOT_METER     Regressing on FLIP by 32 points
>> ZLIB:LIST_PRICE    Regressing on FLIP by 36 points
>> ZLIB:PHONE         Regressing on FLIP by 50 points
>>
>> SPLIT has no size regressions.
>>
>> With ZSTD, SPLIT has a couple of regressions in size:
>>
>> ZSTD:DISCOUNT_AMT  Regressing on FLIP by 5 points
>> ZSTD:IOT_METER     Regressing on FLIP by 17 points
>> ZSTD:HIGGS         Regressing on FLIP by 18 points
>> ZSTD:LIST_PRICE    Regressing on FLIP by 30 points
>> ZSTD:PHONE         Regressing on FLIP by 55 points
>>
>> ZSTD:HIGGS         Regressing on SPLIT by 10 points
>> ZSTD:PHONE         Regressing on SPLIT by 3 points
>>
>> but FLIP still has more size regressions & big ones there.
>>
>> I'm continuing to mess with both algorithms, but I have wider problems to
>> fix in FLIP & at a lower algorithm level than in SPLIT.
>>
>> Cheers,
>> Gopal
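[Editor's note: the arithmetic behind Owen's second layout (filling each fixed-size "page" with a proportional slice of every stream) can be sketched as below. The page size and stream lengths are illustrative; the point is that the reader can derive every slice size from the total stream lengths alone, with no extra metadata.]

```python
# Sketch of proportional page interleaving: each fixed-size page (e.g.
# 256k) carries a slice of every stream proportional to that stream's
# share of the column's total bytes. The last slice absorbs integer
# rounding so the slices always sum exactly to the page size.

PAGE = 256 * 1024  # 256k page, as in Owen's example

def page_slices(stream_lengths, page=PAGE):
    total = sum(stream_lengths)
    slices = [page * n // total for n in stream_lengths[:-1]]
    slices.append(page - sum(slices))  # remainder goes to the last stream
    return slices

# Three streams holding 50%, 30%, and 20% of the column's data:
print(page_slices([500, 300, 200]))  # -> [131072, 78643, 52429]
```

A reader that knows the three stream lengths can recompute these slice sizes itself, which is why this layout needs no additional per-page metadata.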