From: "ASF GitHub Bot (Jira)"
To: issues@hive.apache.org
Date: Tue, 21 Jul 2020 14:59:00 +0000 (UTC)
Subject: [jira] [Work logged] (HIVE-23716) Support Anti Join in Hive

[ 
https://issues.apache.org/jira/browse/HIVE-23716?focusedWorklogId=461628&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-461628 ]

ASF GitHub Bot logged work on HIVE-23716:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Jul/20 14:58
            Start Date: 21/Jul/20 14:58
    Worklog Time Spent: 10m

Work Description: pgaref commented on a change in pull request #1147:
URL: https://github.com/apache/hive/pull/1147#discussion_r458164592


##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/VectorMapJoinAntiJoinLongOperator.java
##########
@@ -0,0 +1,315 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.exec.vector.mapjoin;
+
+import org.apache.hadoop.hive.ql.CompilationOpContext;
+import org.apache.hadoop.hive.ql.exec.JoinUtil;
+import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizationContext;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression;
+import org.apache.hadoop.hive.ql.exec.vector.mapjoin.hashtable.VectorMapJoinLongHashSet;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.plan.OperatorDesc;
+import org.apache.hadoop.hive.ql.plan.VectorDesc;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+
+// TODO : Duplicate code; needs to be merged with semi join.
+// Single-Column Long hash table import.
+// Single-Column Long specific imports.
+
+/*
+ * Specialized class for doing a vectorized map join that is an anti join on a Single-Column Long
+ * using a hash set.
+ */
+public class VectorMapJoinAntiJoinLongOperator extends VectorMapJoinAntiJoinGenerateResultOperator {
+
+  private static final long serialVersionUID = 1L;
+  private static final String CLASS_NAME = VectorMapJoinAntiJoinLongOperator.class.getName();
+  private static final Logger LOG = LoggerFactory.getLogger(CLASS_NAME);
+
+  protected String getLoggingPrefix() {
+    return super.getLoggingPrefix(CLASS_NAME);
+  }
+
+  // The above members are initialized by the constructor and must not be
+  // transient.
+
+  // The hash set for this specialized class.
+  private transient VectorMapJoinLongHashSet hashSet;
+
+  // Single-Column Long specific members.
+  // For integers, we have optional min/max filtering.
+  private transient boolean useMinMax;
+  private transient long min;
+  private transient long max;
+
+  // The column number for this one column join specialization.
+  private transient int singleJoinColumn;
+
+  // Pass-thru constructors.
+
+  /** Kryo ctor. */
+  protected VectorMapJoinAntiJoinLongOperator() {
+    super();
+  }
+
+  public VectorMapJoinAntiJoinLongOperator(CompilationOpContext ctx) {
+    super(ctx);
+  }
+
+  public VectorMapJoinAntiJoinLongOperator(CompilationOpContext ctx, OperatorDesc conf,
+      VectorizationContext vContext, VectorDesc vectorDesc) throws HiveException {
+    super(ctx, conf, vContext, vectorDesc);
+  }
+
+  // Process Single-Column Long Anti Join on a vectorized row batch.
+
+  @Override
+  protected void commonSetup() throws HiveException {
+    super.commonSetup();
+
+    // Initialize Single-Column Long members for this specialized class.
+    singleJoinColumn = bigTableKeyColumnMap[0];
+  }
+
+  @Override
+  public void hashTableSetup() throws HiveException {
+    super.hashTableSetup();
+
+    // Get our Single-Column Long hash set information for this specialized class.
+    hashSet = (VectorMapJoinLongHashSet) vectorMapJoinHashTable;
+    useMinMax = hashSet.useMinMax();
+    if (useMinMax) {
+      min = hashSet.min();
+      max = hashSet.max();
+    }
+  }
+
+  @Override
+  public void processBatch(VectorizedRowBatch batch) throws HiveException {
+
+    try {
+      // (Currently none)
+      // antiPerBatchSetup(batch);
+
+      // For anti joins, we may apply the filter(s) now.
+      for (VectorExpression ve : bigTableFilterExpressions) {
+        ve.evaluate(batch);
+      }
+
+      final int inputLogicalSize = batch.size;
+      if (inputLogicalSize == 0) {
+        return;
+      }
+
+      // Perform any key expressions.  Results will go into scratch columns.
+      if (bigTableKeyExpressions != null) {
+        for (VectorExpression ve : bigTableKeyExpressions) {
+          ve.evaluate(batch);
+        }
+      }
+
+      // The one join column for this specialized class.
+      LongColumnVector joinColVector = (LongColumnVector) batch.cols[singleJoinColumn];
+      long[] vector = joinColVector.vector;
+
+      // Check single column for repeating.
+      boolean allKeyInputColumnsRepeating = joinColVector.isRepeating;
+
+      if (allKeyInputColumnsRepeating) {
+        // All key input columns are repeating.  Generate key once.  Lookup once.
+        // Since the key is repeated, we must use entry 0 regardless of selectedInUse.
+        JoinUtil.JoinResult joinResult;
+        if (!joinColVector.noNulls && joinColVector.isNull[0]) {
+          // For anti join, if the right side is null then it's a match.
+          joinResult = JoinUtil.JoinResult.MATCH;
+        } else {
+          long key = vector[0];
+          if (useMinMax && (key < min || key > max)) {
+            // Out of range for the whole batch.  It's a match for anti join; we can emit the row.
+            joinResult = JoinUtil.JoinResult.MATCH;
+          } else {
+            joinResult = hashSet.contains(key, hashSetResults[0]);
+            // Reverse the join result for anti join.
+            if (joinResult == JoinUtil.JoinResult.NOMATCH) {
+              joinResult = JoinUtil.JoinResult.MATCH;
+            } else if (joinResult == JoinUtil.JoinResult.MATCH) {
+              joinResult = JoinUtil.JoinResult.NOMATCH;
+            }
+          }
+        }
+
+        // Common repeated join result processing.
+        if (LOG.isDebugEnabled()) {
+          LOG.debug(CLASS_NAME + " batch #" + batchCounter + " repeated joinResult " + joinResult.name());
+        }
+        finishAntiRepeated(batch, joinResult, hashSetResults[0]);
+      } else {
+        // NOT Repeating.
+
+        if (LOG.isDebugEnabled()) {
+          LOG.debug(CLASS_NAME + " batch #" + batchCounter + " non-repeated");
+        }
+
+        // We remember any matching rows in matches / matchSize.  At the end of the loop,
+        // selected / batch.size will represent both matching and non-matching rows for outer join.
+        // Only deferred rows will have been removed from selected.
+        int[] selected = batch.selected;
+        boolean selectedInUse = batch.selectedInUse;
+
+        int hashSetResultCount = 0;
+        int allMatchCount = 0;
+        int spillCount = 0;
+        long saveKey = 0;
+
+        // We optimize performance by only looking up the first key in a series of equal keys.
+        boolean haveSaveKey = false;
+        JoinUtil.JoinResult saveJoinResult = JoinUtil.JoinResult.NOMATCH;
+
+        // Logical loop over the rows in the batch since the batch may have selected in use.
+        for (int logical = 0; logical < inputLogicalSize; logical++) {
+          int batchIndex = (selectedInUse ? selected[logical] : logical);
+
+          // Single-Column Long get key.
+          long currentKey;
+          boolean isNull;
+          if (!joinColVector.noNulls && joinColVector.isNull[batchIndex]) {
+            currentKey = 0;
+            isNull = true;
+          } else {
+            currentKey = vector[batchIndex];
+            isNull = false;
+          }
+
+          // Equal key series checking.
+          if (isNull || !haveSaveKey || currentKey != saveKey) {

Review comment:
       It seems that this could be simplified (not sure the haveSaveKey variable is needed).

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 461628)
            Time Spent: 3.5h  (was: 3h 20m)

> Support Anti Join in Hive
> --------------------------
>
>                 Key: HIVE-23716
>                 URL: https://issues.apache.org/jira/browse/HIVE-23716
>             Project: Hive
>          Issue Type: Bug
>            Reporter: mahesh kumar behera
>            Assignee: mahesh kumar behera
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-23716.01.patch
>
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Currently Hive does not support anti join. A query with anti-join semantics is converted to a left outer join, and a null filter on the right-side join key is added to get the desired result. This causes:
> # Extra computation - The left outer join projects the redundant columns from the right side. Along with that, filtering is done to remove the redundant rows. 
This can be avoided with an anti join, as the anti join projects only the required columns and rows from the left side table.
> # Extra shuffle - In case of anti join, the duplicate records moved to the join node can be avoided from the child node. This can reduce a significant amount of data movement if the number of distinct rows (join keys) is significant.
> # Extra memory usage - In case of a map-based anti join, a hash set is sufficient, as just the key is required to check whether a record matches the join condition. In case of left join, we need the key and the non-key columns as well, and thus a hash table is required.
> For a query like
> {code:java}
> select wr_order_number FROM web_returns LEFT JOIN web_sales ON wr_order_number = ws_order_number WHERE ws_order_number IS NULL;{code}
> The number of distinct ws_order_number values in the web_sales table in a typical 10 TB TPC-DS setup is just 10% of the total records. So when we convert this query to an anti join, instead of 7 billion rows, only 600 million rows are moved to the join node.
> In the current patch, just one conversion is done. The pattern project->filter->left-join is converted to project->anti-join. This takes care of subqueries with a "not exists" clause. Queries with "not exists" are first converted to filter + left-join and then converted to anti join. Queries with "not in" are not handled in the current patch.
> From the execution side, both merge join and map join with vectorized execution are supported for anti join.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
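The hash-set point in the description above can be illustrated with a minimal sketch: an anti join only needs to know whether a left-side key is *absent* from the right side, so a set of right-side keys suffices, whereas a left outer join plus null filter would also have to carry the right side's non-key columns. The class and method names here are illustrative, not Hive's actual API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AntiJoinSketch {

    // Anti join via hash set: keep each left key that has no match on the
    // right side. Note the set stores only keys, never payload columns.
    static List<Long> antiJoin(List<Long> leftKeys, List<Long> rightKeys) {
        Set<Long> rightSet = new HashSet<>(rightKeys);
        List<Long> result = new ArrayList<>();
        for (Long key : leftKeys) {
            if (!rightSet.contains(key)) {
                result.add(key);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Left side keys 1..4; right side contains 2 and 4,
        // so the anti join emits 1 and 3.
        List<Long> left = Arrays.asList(1L, 2L, 3L, 4L);
        List<Long> right = Arrays.asList(2L, 4L);
        System.out.println(antiJoin(left, right)); // prints [1, 3]
    }
}
```

This also mirrors the MATCH/NOMATCH reversal seen in the patch: the set lookup answers "does the key exist on the right?", and the anti join emits the row exactly when that answer is no.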