From: namit@apache.org
To: commits@hive.apache.org
Reply-To: hive-dev@hive.apache.org
Date: Tue, 25 Sep 2012 17:32:11 -0000
Subject: svn commit: r1390010 [2/2] - in /hive/trunk/ql/src: java/org/apache/hadoop/hive/ql/ java/org/apache/hadoop/hive/ql/exec/ java/org/apache/hadoop/hive/ql/optimizer/ java/org/apache/hadoop/hive/ql/parse/ java/org/apache/hadoop/hive/ql/plan/ test/queries/c...
Message-Id: <20120925173212.AE5512388900@eris.apache.org>

Added: hive/trunk/ql/src/test/results/clientpositive/join_filters_overlap.q.out
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/clientpositive/join_filters_overlap.q.out?rev=1390010&view=auto
==============================================================================
--- hive/trunk/ql/src/test/results/clientpositive/join_filters_overlap.q.out (added)
+++ hive/trunk/ql/src/test/results/clientpositive/join_filters_overlap.q.out Tue Sep 25 17:32:09 2012
@@ -0,0 +1,1088 @@
+PREHOOK: query: -- HIVE-3411 Filter predicates on outer join overlapped on single alias is not handled properly
+
+create table a as SELECT 100 as key, a.value as value FROM src LATERAL VIEW explode(array(40, 50, 60)) a as value limit 3
+PREHOOK: type: CREATETABLE_AS_SELECT
+PREHOOK: Input: default@src
+POSTHOOK: query: -- HIVE-3411 Filter predicates on outer join overlapped on single alias is not handled properly
+
+create table a as SELECT 100 as key, a.value as value FROM src LATERAL VIEW explode(array(40, 50, 60)) a as value limit 3
+POSTHOOK: type: CREATETABLE_AS_SELECT
+POSTHOOK: Input: default@src
+POSTHOOK: Output: default@a
+PREHOOK: query: -- overlap on a
+explain extended select * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60)
+PREHOOK: type: QUERY
+POSTHOOK: query: -- overlap on a
+explain extended select * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60)
+POSTHOOK: type: QUERY
+ABSTRACT SYNTAX TREE:
+  (TOK_QUERY (TOK_FROM
(TOK_LEFTOUTERJOIN (TOK_LEFTOUTERJOIN (TOK_TABREF (TOK_TABNAME a)) (TOK_TABREF (TOK_TABNAME a) b) (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL b) key)) (= (. (TOK_TABLE_OR_COL a) value) 50)) (= (. (TOK_TABLE_OR_COL b) value) 50))) (TOK_TABREF (TOK_TABNAME a) c) (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL c) key)) (= (. (TOK_TABLE_OR_COL a) value) 60)) (= (. (TOK_TABLE_OR_COL c) value) 60)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF)))) + +STAGE DEPENDENCIES: + Stage-1 is a root stage + Stage-0 is a root stage + +STAGE PLANS: + Stage: Stage-1 + Map Reduce + Alias -> Map Operator Tree: + a + TableScan + alias: a + GatherStats: false + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 0 + value expressions: + expr: key + type: int + expr: value + type: int + b + TableScan + alias: b + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 50) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 1 + value expressions: + expr: key + type: int + expr: value + type: int + c + TableScan + alias: c + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 60) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 2 + value expressions: + expr: key + type: int + expr: value + type: int + Needs Tagging: true + Path -> Alias: +#### A masked pattern was here #### + Path -> Partition: +#### A masked pattern was here #### + Partition + base file name: a + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + name: default.a + name: default.a + Reduce Operator Tree: + Join Operator + condition map: + Left Outer Join0 to 1 + Left Outer Join0 to 2 + condition expressions: + 0 {VALUE._col0} {VALUE._col1} + 1 {VALUE._col0} {VALUE._col1} + 2 {VALUE._col0} {VALUE._col1} + filter mappings: + 0 [1, 1, 2, 1] + filter predicates: + 0 {(VALUE._col1 = 50)} {(VALUE._col1 = 60)} + 1 + 2 + handleSkewJoin: false + outputColumnNames: _col0, _col1, _col4, _col5, _col8, _col9 + Select Operator + expressions: + expr: _col0 + type: int + expr: _col1 + type: int + expr: 
_col4 + type: int + expr: _col5 + type: int + expr: _col8 + type: int + expr: _col9 + type: int + outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5 + File Output Operator + compressed: false + GlobalTableId: 0 +#### A masked pattern was here #### + NumFilesPerFileSink: 1 +#### A masked pattern was here #### + table: + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + columns _col0,_col1,_col2,_col3,_col4,_col5 + columns.types int:int:int:int:int:int + escape.delim \ + serialization.format 1 + TotalFiles: 1 + GatherStats: false + MultiFileSpray: false + + Stage: Stage-0 + Fetch Operator + limit: -1 + + +PREHOOK: query: select * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +100 40 NULL NULL NULL NULL +100 50 100 50 NULL NULL +100 60 NULL NULL 100 60 +PREHOOK: query: select /*+ MAPJOIN(b,c)*/ * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select /*+ MAPJOIN(b,c)*/ * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +100 40 NULL NULL NULL NULL +100 50 100 50 NULL NULL +100 60 NULL NULL 100 60 +PREHOOK: query: -- overlap on b +explain extended select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) +PREHOOK: type: QUERY +POSTHOOK: query: -- overlap on b +explain extended select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) +POSTHOOK: type: QUERY +ABSTRACT SYNTAX TREE: + (TOK_QUERY (TOK_FROM (TOK_LEFTOUTERJOIN (TOK_RIGHTOUTERJOIN (TOK_TABREF (TOK_TABNAME a)) (TOK_TABREF (TOK_TABNAME a) b) (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL b) key)) (= (. (TOK_TABLE_OR_COL a) value) 50)) (= (. (TOK_TABLE_OR_COL b) value) 50))) (TOK_TABREF (TOK_TABNAME a) c) (AND (AND (= (. (TOK_TABLE_OR_COL b) key) (. (TOK_TABLE_OR_COL c) key)) (= (. (TOK_TABLE_OR_COL b) value) 60)) (= (. 
(TOK_TABLE_OR_COL c) value) 60)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF)))) + +STAGE DEPENDENCIES: + Stage-1 is a root stage + Stage-0 is a root stage + +STAGE PLANS: + Stage: Stage-1 + Map Reduce + Alias -> Map Operator Tree: + a + TableScan + alias: a + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 50) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 0 + value expressions: + expr: key + type: int + expr: value + type: int + b + TableScan + alias: b + GatherStats: false + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 1 + value expressions: + expr: key + type: int + expr: value + type: int + c + TableScan + alias: c + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 60) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 2 + value expressions: + expr: key + type: int + expr: value + type: int + Needs Tagging: true + Path -> Alias: +#### A masked pattern was here #### + Path -> Partition: +#### A masked pattern was here #### + Partition + base file name: a + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + name: default.a + name: default.a + Reduce Operator Tree: + Join Operator + condition map: + Right Outer Join0 to 1 + Left Outer Join1 to 2 + condition expressions: + 0 {VALUE._col0} {VALUE._col1} + 1 {VALUE._col0} {VALUE._col1} + 2 {VALUE._col0} {VALUE._col1} + filter mappings: + 1 [0, 1, 2, 1] + filter predicates: + 0 + 1 {(VALUE._col1 = 50)} {(VALUE._col1 = 60)} + 2 + handleSkewJoin: false + outputColumnNames: _col0, _col1, _col4, _col5, _col8, _col9 + Select Operator + expressions: + expr: _col0 + type: int + expr: _col1 + type: int + expr: _col4 + type: int + expr: _col5 + type: int + expr: _col8 + type: int + expr: _col9 + type: int + outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5 + File Output Operator + compressed: false + GlobalTableId: 0 +#### A masked pattern was here #### + NumFilesPerFileSink: 1 +#### A masked pattern was here #### + table: + input format: org.apache.hadoop.mapred.TextInputFormat + output 
format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + columns _col0,_col1,_col2,_col3,_col4,_col5 + columns.types int:int:int:int:int:int + escape.delim \ + serialization.format 1 + TotalFiles: 1 + GatherStats: false + MultiFileSpray: false + + Stage: Stage-0 + Fetch Operator + limit: -1 + + +PREHOOK: query: select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +NULL NULL 100 40 NULL NULL +100 50 100 50 NULL NULL +NULL NULL 100 60 100 60 +PREHOOK: query: select /*+ MAPJOIN(a,c)*/ * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select /*+ MAPJOIN(a,c)*/ * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +NULL NULL 100 40 NULL NULL +100 50 100 50 NULL NULL +NULL NULL 100 60 100 60 +PREHOOK: query: -- overlap on b with two filters for each +explain extended select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50 AND b.value>10) left outer join a c on (b.key=c.key AND b.value=60 AND b.value>20 AND c.value=60) +PREHOOK: type: QUERY +POSTHOOK: query: -- overlap on b with two filters for each +explain extended select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50 AND b.value>10) left outer join a c on (b.key=c.key AND b.value=60 AND b.value>20 AND c.value=60) +POSTHOOK: type: QUERY +ABSTRACT SYNTAX TREE: + (TOK_QUERY (TOK_FROM (TOK_LEFTOUTERJOIN (TOK_RIGHTOUTERJOIN (TOK_TABREF (TOK_TABNAME a)) (TOK_TABREF (TOK_TABNAME a) b) (AND (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL b) key)) (= (. (TOK_TABLE_OR_COL a) value) 50)) (= (. (TOK_TABLE_OR_COL b) value) 50)) (> (. (TOK_TABLE_OR_COL b) value) 10))) (TOK_TABREF (TOK_TABNAME a) c) (AND (AND (AND (= (. (TOK_TABLE_OR_COL b) key) (. (TOK_TABLE_OR_COL c) key)) (= (. (TOK_TABLE_OR_COL b) value) 60)) (> (. (TOK_TABLE_OR_COL b) value) 20)) (= (. 
(TOK_TABLE_OR_COL c) value) 60)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF)))) + +STAGE DEPENDENCIES: + Stage-1 is a root stage + Stage-0 is a root stage + +STAGE PLANS: + Stage: Stage-1 + Map Reduce + Alias -> Map Operator Tree: + a + TableScan + alias: a + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 50) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 0 + value expressions: + expr: key + type: int + expr: value + type: int + b + TableScan + alias: b + GatherStats: false + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 1 + value expressions: + expr: key + type: int + expr: value + type: int + c + TableScan + alias: c + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 60) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 2 + value expressions: + expr: key + type: int + expr: value + type: int + Needs Tagging: true + Path -> Alias: +#### A masked pattern was here #### + Path -> Partition: +#### A masked pattern was here #### + Partition + base file name: a + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + name: default.a + name: default.a + Reduce Operator Tree: + Join Operator + condition map: + Right Outer Join0 to 1 + Left Outer Join1 to 2 + condition expressions: + 0 {VALUE._col0} {VALUE._col1} + 1 {VALUE._col0} {VALUE._col1} + 2 {VALUE._col0} {VALUE._col1} + filter mappings: + 1 [0, 2, 2, 2] + filter predicates: + 0 + 1 {(VALUE._col1 = 50)} {(VALUE._col1 > 10)} {(VALUE._col1 = 60)} {(VALUE._col1 > 20)} + 2 + handleSkewJoin: false + outputColumnNames: _col0, _col1, _col4, _col5, _col8, _col9 + Select Operator + expressions: + expr: _col0 + type: int + expr: _col1 + type: int + expr: _col4 + type: int + expr: _col5 + type: int + expr: _col8 + type: int + expr: _col9 + type: int + outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5 + File Output Operator + compressed: false + GlobalTableId: 0 +#### A masked pattern was here #### + NumFilesPerFileSink: 1 +#### A masked pattern was here #### + table: + input format: 
org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + columns _col0,_col1,_col2,_col3,_col4,_col5 + columns.types int:int:int:int:int:int + escape.delim \ + serialization.format 1 + TotalFiles: 1 + GatherStats: false + MultiFileSpray: false + + Stage: Stage-0 + Fetch Operator + limit: -1 + + +PREHOOK: query: select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50 AND b.value>10) left outer join a c on (b.key=c.key AND b.value=60 AND b.value>20 AND c.value=60) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50 AND b.value>10) left outer join a c on (b.key=c.key AND b.value=60 AND b.value>20 AND c.value=60) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +NULL NULL 100 40 NULL NULL +100 50 100 50 NULL NULL +NULL NULL 100 60 100 60 +PREHOOK: query: select /*+ MAPJOIN(a,c)*/ * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50 AND b.value>10) left outer join a c on (b.key=c.key AND b.value=60 AND b.value>20 AND c.value=60) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select /*+ MAPJOIN(a,c)*/ * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50 AND b.value>10) left outer join a c on (b.key=c.key AND b.value=60 AND b.value>20 AND c.value=60) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +NULL NULL 100 40 NULL NULL +100 50 100 50 NULL NULL +NULL NULL 100 60 100 60 +PREHOOK: query: -- overlap on a, b +explain extended select * from a full outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +PREHOOK: type: QUERY +POSTHOOK: query: -- overlap on a, b +explain extended select * from a full outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +POSTHOOK: type: QUERY +ABSTRACT SYNTAX TREE: + (TOK_QUERY (TOK_FROM (TOK_LEFTOUTERJOIN (TOK_LEFTOUTERJOIN (TOK_FULLOUTERJOIN (TOK_TABREF (TOK_TABNAME a)) (TOK_TABREF (TOK_TABNAME a) b) (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL b) key)) (= (. (TOK_TABLE_OR_COL a) value) 50)) (= (. (TOK_TABLE_OR_COL b) value) 50))) (TOK_TABREF (TOK_TABNAME a) c) (AND (AND (= (. (TOK_TABLE_OR_COL b) key) (. (TOK_TABLE_OR_COL c) key)) (= (. (TOK_TABLE_OR_COL b) value) 60)) (= (. (TOK_TABLE_OR_COL c) value) 60))) (TOK_TABREF (TOK_TABNAME a) d) (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL d) key)) (= (. (TOK_TABLE_OR_COL a) value) 40)) (= (. 
(TOK_TABLE_OR_COL d) value) 40)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF)))) + +STAGE DEPENDENCIES: + Stage-1 is a root stage + Stage-0 is a root stage + +STAGE PLANS: + Stage: Stage-1 + Map Reduce + Alias -> Map Operator Tree: + a + TableScan + alias: a + GatherStats: false + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 0 + value expressions: + expr: key + type: int + expr: value + type: int + b + TableScan + alias: b + GatherStats: false + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 1 + value expressions: + expr: key + type: int + expr: value + type: int + c + TableScan + alias: c + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 60) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 3 + value expressions: + expr: key + type: int + expr: value + type: int + d + TableScan + alias: d + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 40) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 2 + value expressions: + expr: key + type: int + expr: value + type: int + Needs Tagging: true + Path -> Alias: +#### A masked pattern was here #### + Path -> Partition: +#### A masked pattern was here #### + Partition + base file name: a + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + name: default.a + name: default.a + Reduce Operator Tree: + Join Operator + condition map: + Outer Join 0 to 1 + Left Outer Join0 to 2 + Left Outer Join1 to 3 + condition expressions: + 0 {VALUE._col0} {VALUE._col1} + 1 {VALUE._col0} {VALUE._col1} + 2 {VALUE._col0} {VALUE._col1} + 3 {VALUE._col0} {VALUE._col1} + filter mappings: + 0 [1, 1, 2, 1] + 1 [0, 1, 3, 1] + filter predicates: + 0 {(VALUE._col1 = 50)} {(VALUE._col1 = 40)} + 1 {(VALUE._col1 = 50)} {(VALUE._col1 = 60)} + 2 + 3 + handleSkewJoin: false + outputColumnNames: _col0, _col1, _col4, _col5, _col8, _col9, _col12, _col13 + Select Operator + expressions: + expr: _col0 + type: int + expr: _col1 + type: int + expr: 
_col4 + type: int + expr: _col5 + type: int + expr: _col12 + type: int + expr: _col13 + type: int + expr: _col8 + type: int + expr: _col9 + type: int + outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7 + File Output Operator + compressed: false + GlobalTableId: 0 +#### A masked pattern was here #### + NumFilesPerFileSink: 1 +#### A masked pattern was here #### + table: + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + columns _col0,_col1,_col2,_col3,_col4,_col5,_col6,_col7 + columns.types int:int:int:int:int:int:int:int + escape.delim \ + serialization.format 1 + TotalFiles: 1 + GatherStats: false + MultiFileSpray: false + + Stage: Stage-0 + Fetch Operator + limit: -1 + + +PREHOOK: query: select * from a full outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select * from a full outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +100 40 NULL NULL NULL NULL 100 40 +NULL NULL 100 40 NULL NULL NULL NULL +100 40 NULL NULL NULL NULL 100 40 +100 40 NULL NULL NULL NULL 100 40 +NULL NULL 100 60 100 60 NULL NULL +100 50 NULL NULL NULL NULL NULL NULL +NULL NULL 100 40 NULL NULL NULL NULL +100 50 100 50 NULL NULL NULL NULL +100 50 NULL NULL NULL NULL NULL NULL +NULL NULL 100 60 100 60 NULL NULL +100 60 NULL NULL NULL NULL NULL NULL +NULL NULL 100 40 NULL NULL NULL NULL +100 60 NULL NULL NULL NULL NULL NULL +100 60 NULL NULL NULL NULL NULL NULL +NULL NULL 100 60 100 60 NULL NULL +PREHOOK: query: -- triple overlap on a +explain extended select * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +PREHOOK: type: QUERY +POSTHOOK: query: -- triple overlap on a +explain extended select * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +POSTHOOK: type: QUERY +ABSTRACT SYNTAX TREE: + (TOK_QUERY (TOK_FROM (TOK_LEFTOUTERJOIN (TOK_LEFTOUTERJOIN (TOK_LEFTOUTERJOIN (TOK_TABREF (TOK_TABNAME a)) (TOK_TABREF (TOK_TABNAME a) b) (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL b) key)) (= (. (TOK_TABLE_OR_COL a) value) 50)) (= (. (TOK_TABLE_OR_COL b) value) 50))) (TOK_TABREF (TOK_TABNAME a) c) (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL c) key)) (= (. (TOK_TABLE_OR_COL a) value) 60)) (= (. (TOK_TABLE_OR_COL c) value) 60))) (TOK_TABREF (TOK_TABNAME a) d) (AND (AND (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL d) key)) (= (. (TOK_TABLE_OR_COL a) value) 40)) (= (. 
(TOK_TABLE_OR_COL d) value) 40)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF)))) + +STAGE DEPENDENCIES: + Stage-1 is a root stage + Stage-0 is a root stage + +STAGE PLANS: + Stage: Stage-1 + Map Reduce + Alias -> Map Operator Tree: + a + TableScan + alias: a + GatherStats: false + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 0 + value expressions: + expr: key + type: int + expr: value + type: int + b + TableScan + alias: b + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 50) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 1 + value expressions: + expr: key + type: int + expr: value + type: int + c + TableScan + alias: c + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 60) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 2 + value expressions: + expr: key + type: int + expr: value + type: int + d + TableScan + alias: d + GatherStats: false + Filter Operator + isSamplingPred: false + predicate: + expr: (value = 40) + type: boolean + Reduce Output Operator + key expressions: + expr: key + type: int + sort order: + + Map-reduce partition columns: + expr: key + type: int + tag: 3 + value expressions: + expr: key + type: int + expr: value + type: int + Needs Tagging: true + Path -> Alias: +#### A masked pattern was here #### + Path -> Partition: +#### A masked pattern was here #### + Partition + base file name: a + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + bucket_count -1 + columns key,value + columns.types int:int +#### A masked pattern was here #### + name default.a + numFiles 1 + numPartitions 0 + numRows 3 + rawDataSize 18 + serialization.ddl struct a { i32 key, i32 value} + serialization.format 1 + serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + totalSize 21 +#### A masked pattern was here #### + serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe + name: default.a + name: default.a + Reduce Operator Tree: + Join Operator + condition map: + Left Outer Join0 to 1 + Left Outer Join0 to 2 + Left Outer Join0 to 3 + condition expressions: + 0 {VALUE._col0} {VALUE._col1} + 1 {VALUE._col0} {VALUE._col1} + 2 {VALUE._col0} {VALUE._col1} + 3 {VALUE._col0} {VALUE._col1} + filter mappings: + 0 [1, 1, 2, 1, 3, 1] + filter predicates: + 0 {(VALUE._col1 = 50)} {(VALUE._col1 = 60)} {(VALUE._col1 = 40)} + 1 + 2 + 3 + handleSkewJoin: false + outputColumnNames: _col0, _col1, _col4, _col5, _col8, _col9, _col12, _col13 + Select Operator + 
expressions: + expr: _col0 + type: int + expr: _col1 + type: int + expr: _col4 + type: int + expr: _col5 + type: int + expr: _col8 + type: int + expr: _col9 + type: int + expr: _col12 + type: int + expr: _col13 + type: int + outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7 + File Output Operator + compressed: false + GlobalTableId: 0 +#### A masked pattern was here #### + NumFilesPerFileSink: 1 +#### A masked pattern was here #### + table: + input format: org.apache.hadoop.mapred.TextInputFormat + output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat + properties: + columns _col0,_col1,_col2,_col3,_col4,_col5,_col6,_col7 + columns.types int:int:int:int:int:int:int:int + escape.delim \ + serialization.format 1 + TotalFiles: 1 + GatherStats: false + MultiFileSpray: false + + Stage: Stage-0 + Fetch Operator + limit: -1 + + +PREHOOK: query: select * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +100 40 NULL NULL NULL NULL 100 40 +100 50 100 50 NULL NULL NULL NULL +100 60 NULL NULL 100 60 NULL NULL +PREHOOK: query: select /*+ MAPJOIN(b,c, d)*/ * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +PREHOOK: type: QUERY +PREHOOK: Input: default@a +#### A masked pattern was here #### +POSTHOOK: query: select /*+ MAPJOIN(b,c, d)*/ * from a left outer join a b on (a.key=b.key AND a.value=50 AND b.value=50) left outer join a c on (a.key=c.key AND a.value=60 AND c.value=60) left outer join a d on (a.key=d.key AND a.value=40 AND d.value=40) +POSTHOOK: type: QUERY +POSTHOOK: Input: default@a +#### A masked pattern was here #### +100 40 NULL NULL NULL NULL 100 40 +100 50 100 50 NULL NULL NULL NULL +100 60 NULL NULL 100 60 NULL NULL Modified: hive/trunk/ql/src/test/results/clientpositive/louter_join_ppr.q.out URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/clientpositive/louter_join_ppr.q.out?rev=1390010&r1=1390009&r2=1390010&view=diff ============================================================================== --- hive/trunk/ql/src/test/results/clientpositive/louter_join_ppr.q.out (original) +++ hive/trunk/ql/src/test/results/clientpositive/louter_join_ppr.q.out Tue Sep 25 17:32:09 2012 @@ -615,6 +615,8 @@ STAGE PLANS: condition expressions: 0 {VALUE._col0} {VALUE._col1} 1 {VALUE._col0} {VALUE._col1} + filter mappings: + 0 [1, 1] filter predicates: 0 {(VALUE._col2 = '2008-04-08')} 1 Modified: hive/trunk/ql/src/test/results/clientpositive/outer_join_ppr.q.out URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/clientpositive/outer_join_ppr.q.out?rev=1390010&r1=1390009&r2=1390010&view=diff ============================================================================== --- hive/trunk/ql/src/test/results/clientpositive/outer_join_ppr.q.out (original) +++ hive/trunk/ql/src/test/results/clientpositive/outer_join_ppr.q.out 
Tue Sep 25 17:32:09 2012
@@ -306,6 +306,8 @@ STAGE PLANS:
           condition expressions:
             0 {VALUE._col0} {VALUE._col1}
             1 {VALUE._col0} {VALUE._col1}
+          filter mappings:
+            1 [0, 1]
           filter predicates:
             0 
             1 {(VALUE._col2 = '2008-04-08')}

Modified: hive/trunk/ql/src/test/results/clientpositive/router_join_ppr.q.out
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/clientpositive/router_join_ppr.q.out?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/clientpositive/router_join_ppr.q.out (original)
+++ hive/trunk/ql/src/test/results/clientpositive/router_join_ppr.q.out Tue Sep 25 17:32:09 2012
@@ -316,6 +316,8 @@ STAGE PLANS:
           condition expressions:
             0 {VALUE._col0} {VALUE._col1}
             1 {VALUE._col0} {VALUE._col1}
+          filter mappings:
+            1 [0, 1]
           filter predicates:
             0 
             1 {(VALUE._col2 = '2008-04-08')}

Modified: hive/trunk/ql/src/test/results/clientpositive/union22.q.out
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/clientpositive/union22.q.out?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/clientpositive/union22.q.out (original)
+++ hive/trunk/ql/src/test/results/clientpositive/union22.q.out Tue Sep 25 17:32:09 2012
@@ -119,6 +119,8 @@ STAGE PLANS:
           condition expressions:
             0 {k1} {k2}
             1 {_col3} {_col4}
+          filter mappings:
+            0 [1, 1]
           filter predicates:
             0 {(ds = '1')}
             1 
@@ -146,6 +148,8 @@ STAGE PLANS:
           condition expressions:
             0 {k1} {k2}
             1 {_col3} {_col4}
+          filter mappings:
+            0 [1, 1]
           filter predicates:
             0 {(ds = '1')}
             1 

Modified: hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml (original)
+++ hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml Tue Sep 25 17:32:09 2012
@@ -1595,6 +1595,9 @@
+ [added XML elements; markup lost in extraction]

Modified: hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml (original)
+++ hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml Tue Sep 25 17:32:09 2012
@@ -1521,6 +1521,9 @@
+ [added XML elements; markup lost in extraction]
@@ -2985,6 +2988,9 @@
+ [added XML elements; markup lost in extraction]

Modified: hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml (original)
+++ hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml Tue Sep 25 17:32:09 2012
@@ -2054,6 +2054,9 @@
+ [added XML elements; markup lost in extraction]

Modified: hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml (original)
+++ hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml Tue Sep 25 17:32:09 2012
@@ -2336,6 +2336,17 @@
+ [added XML elements; markup lost in extraction, surviving literal: 1]

Modified: hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml (original)
+++ hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml Tue Sep 25 17:32:09 2012
@@ -2336,6 +2336,13 @@
+ [added XML elements; markup lost in extraction]

Modified: hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml (original)
+++ hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml Tue Sep 25 17:32:09 2012
@@ -2336,6 +2336,20 @@
+ [added XML elements; markup lost in extraction, surviving literal: 1]

Modified: hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml (original)
+++ hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml Tue Sep 25 17:32:09 2012
@@ -3285,6 +3285,23 @@
+ [added XML elements; markup lost in extraction, surviving literals: 1, 2]

Modified: hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml
URL: http://svn.apache.org/viewvc/hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml?rev=1390010&r1=1390009&r2=1390010&view=diff
==============================================================================
--- hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml (original)
+++ hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml Tue Sep 25 17:32:09 2012
@@ -2533,6 +2533,17 @@
+ [added XML elements; markup lost in extraction, surviving literal: 1]
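
A note on the "filter mappings" entries that this commit adds to the plans above: judging only from the output in this mail, the flat array printed for an alias appears to pair each group of that alias's "filter predicates" with the join it guards, as (target join tag, number of filters) pairs. For example, "0 [1, 1, 2, 1]" in the first plan pairs (VALUE._col1 = 50) with the join against tag 1 (alias b) and (VALUE._col1 = 60) with the join against tag 2 (alias c), while "1 [0, 2, 2, 2]" in the "two filters for each" case assigns two predicates to each of tags 0 and 2. The sketch below only illustrates that reading of the printed format; the class and method names are made up for the example and are not Hive APIs.

// Illustrative decoder for the "filter mappings" lines shown in the EXPLAIN output above.
// Assumption (inferred from the plans in this mail, not from Hive source): the array for an
// alias is a sequence of (targetTag, filterCount) pairs, consumed against that alias's
// "filter predicates" list in order. FilterMappingDecoder/decode are hypothetical names.
import java.util.Arrays;
import java.util.List;

public class FilterMappingDecoder {

    // Prints which of one alias's filter predicates guard which join target tag.
    static void decode(int aliasTag, int[] mapping, List<String> predicates) {
        int next = 0; // index of the next predicate not yet assigned to a join
        for (int i = 0; i < mapping.length; i += 2) {
            int targetTag = mapping[i];
            int count = mapping[i + 1];
            List<String> group = predicates.subList(next, next + count);
            System.out.printf("alias %d -> join with tag %d guarded by %s%n",
                    aliasTag, targetTag, group);
            next += count;
        }
    }

    public static void main(String[] args) {
        // "filter mappings: 0 [1, 1, 2, 1]" with predicates {(VALUE._col1 = 50)} {(VALUE._col1 = 60)}
        decode(0, new int[]{1, 1, 2, 1},
               Arrays.asList("(VALUE._col1 = 50)", "(VALUE._col1 = 60)"));
        // "filter mappings: 1 [0, 2, 2, 2]" from the "two filters for each" case
        decode(1, new int[]{0, 2, 2, 2},
               Arrays.asList("(VALUE._col1 = 50)", "(VALUE._col1 > 10)",
                             "(VALUE._col1 = 60)", "(VALUE._col1 > 20)"));
    }
}

Under that reading, the plans are consistent with outer-join semantics: the preserved alias (a in the "overlap on a" plans, b in the "overlap on b" plans) gets no map-side Filter Operator, because dropping its rows before the join would change the result, so its predicates are applied inside the Join Operator through these mappings, while the null-supplying aliases are still filtered on the map side.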