Subject: Re: Review Request 27117: HIVE-8457 - MapOperator initialization fails when multiple Spark threads is enabled [Spark Branch]
From: "Xuefu Zhang"
To: "Xuefu Zhang"
Cc: "hive", "Chao Sun"
Reply-To: dev@hive.apache.org
Date: Fri, 24 Oct 2014 00:33:06 -0000
Message-ID: <20141024003306.1283.32125@reviews.apache.org>
In-Reply-To: <20141023235630.1282.486@reviews.apache.org>
X-ReviewRequest-URL: https://reviews.apache.org/r/27117/
X-ReviewRequest-Repository: hive-git
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27117/#review58183
-----------------------------------------------------------


ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMapRecordHandler.java
    We don't need this, as this class is only used for Spark.

ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMapRecordHandler.java
    Let's give it a less conflicting name, such as SPARK_MAP_IO_CONTEXT. Same below. Better to define a constant in SparkUtils.

ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMapRecordHandler.java
    We may need to copy other fields in IOContext besides the input path.

ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMapRecordHandler.java
    Same as above.

ql/src/java/org/apache/hadoop/hive/ql/io/HiveContextAwareRecordReader.java
    We need to copy every field.

- Xuefu Zhang


On Oct. 23, 2014, 11:56 p.m., Chao Sun wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27117/
> -----------------------------------------------------------
> 
> (Updated Oct. 23, 2014, 11:56 p.m.)
> 
> 
> Review request for hive and Xuefu Zhang.
> 
> 
> Bugs: HIVE-8457
>     https://issues.apache.org/jira/browse/HIVE-8457
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> Currently, on the Spark branch, each thread is bound to a thread-local IOContext, which gets initialized when we generate an input HadoopRDD, and is later used in MapOperator, FilterOperator, etc.
> With the introduction of HIVE-8118, we may have multiple downstream RDDs that share the same input HadoopRDD, and we would like the HadoopRDD to be cached, to avoid scanning the same table multiple times.
> A typical case would be like the following:
> 
>     inputRDD   inputRDD
>        |          |
>      MT_11      MT_12
>        |          |
>      RT_1       RT_2
> 
> Here, MT_11 and MT_12 are MapTrans from a split MapWork, and RT_1 and RT_2 are two ReduceTrans. Note that this example is simplified, as we may also have a ShuffleTran between a MapTran and a ReduceTran.
> When multiple Spark threads are running, MT_11 may be executed first, and when it asks for an iterator from the HadoopRDD, that will trigger the creation of the iterator, which in turn triggers the initialization of the IOContext associated with that particular thread.
> Now, the problem is: before MT_12 starts executing, it will also ask for an iterator from the HadoopRDD, and since the RDD is already cached, instead of creating a new iterator, it will just fetch it from the cached result. However, this skips the initialization of the IOContext associated with this particular thread. And when MT_12 starts executing, it will try to initialize the MapOperator, but since the IOContext is not initialized, this will fail miserably.
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMapRecordHandler.java 20ea977
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 00a6f3d
>   ql/src/java/org/apache/hadoop/hive/ql/io/HiveContextAwareRecordReader.java 58e1ceb
> 
> Diff: https://reviews.apache.org/r/27117/diff/
> 
> 
> Testing
> -------
> 
> All multi-insertion related tests are passing on my local machine.
> 
> 
> Thanks,
> 
> Chao Sun
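[Editor's note: the failure mode described above, and the fix suggested in the review (copying the needed IOContext fields into the current thread's context rather than relying on iterator creation), can be sketched in a self-contained way. Everything below — the `IoContext` and `CachedRdd` classes, the paths, the field names — is a simplified, hypothetical stand-in for the real Hive/Spark classes, not the actual patch:]

```java
import java.util.Arrays;
import java.util.List;

public class ThreadLocalInitSkip {

    // Per-thread context, analogous to Hive's IOContext (simplified).
    static class IoContext {
        String inputPath; // null until initialized
    }

    static final ThreadLocal<IoContext> CONTEXT =
            ThreadLocal.withInitial(IoContext::new);

    // Stands in for the cached HadoopRDD: the expensive "scan" (and the
    // context initialization that rides along with it) runs only once.
    static class CachedRdd {
        private List<String> cache;

        synchronized List<String> iterator(String path) {
            if (cache == null) {
                // First caller: scan the input and, as a side effect,
                // initialize this thread's IOContext.
                CONTEXT.get().inputPath = path;
                cache = Arrays.asList("row1", "row2");
            }
            // Later callers get the cached rows; without an explicit copy,
            // their own thread-local context is never initialized.
            return cache;
        }
    }

    public static void main(String[] args) throws Exception {
        CachedRdd rdd = new CachedRdd();

        // MT_11's thread: triggers iterator creation, so its context is set.
        Thread t1 = new Thread(() -> {
            rdd.iterator("/warehouse/t1");
            System.out.println("t1 path=" + CONTEXT.get().inputPath);
        });
        t1.start();
        t1.join();

        // MT_12's thread: hits the cache, so its context starts out null.
        Thread t2 = new Thread(() -> {
            rdd.iterator("/warehouse/t1");
            System.out.println("t2 before copy=" + CONTEXT.get().inputPath);
            // The review's suggestion, roughly: copy the needed fields into
            // this thread's context instead of relying on iterator creation.
            if (CONTEXT.get().inputPath == null) {
                CONTEXT.get().inputPath = "/warehouse/t1"; // copied, not scanned
            }
            System.out.println("t2 after copy=" + CONTEXT.get().inputPath);
        });
        t2.start();
        t2.join();
    }
}
```

The `t2 before copy=null` line is the bug: the second thread sees the cached rows but an uninitialized context, which is why MapOperator initialization fails. The reviewer's point that "we need to copy every field" corresponds to the copy step being complete, not limited to `inputPath`.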