From: "cheng xu"
To: "cheng xu", "Dong Chen", "Ryan Blue"
Cc: "hive", "Sergio Pena"
Date: Thu, 29 Jan 2015 01:44:20 -0000
Subject: Re: Review Request 30281: Move parquet serialize implementation to DataWritableWriter to improve write speeds
Message-ID: <20150129014420.25679.88944@reviews.apache.org>
In-Reply-To: <20150128052331.25679.66473@reviews.apache.org>
X-ReviewRequest-URL: https://reviews.apache.org/r/30281/


> On Jan.
28, 2015, 5:23 a.m., cheng xu wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java, lines 218-225
> >
> > How about the following code snippet?
> >
> >     recordConsumer.startField(fieldName, i);
> >     if (i % 2 == 0) {
> >       writeValue(keyElement, keyInspector, fieldType);
> >     } else {
> >       writeValue(valueElement, valueInspector, fieldType);
> >     }
> >     recordConsumer.endField(fieldName, i);
> 
> Sergio Pena wrote:
>     The Parquet API does not accept NULL values inside startField/endField. This is why I had to check whether the key or value is null before starting the field. In the change I made, we check for null values everywhere and then call startField/endField in writePrimitive. You can see the TestDataWritableWriter.testMapType() method for how null values should work.
> 
>     This is how Parquet adds the map entry 'key3 = null':
> 
>     startGroup();
>       startField("key", 0);
>         addString("key3");
>       endField("key", 0);
>     endGroup();

I see. Parquet does not handle null values well in the startField/endField methods; sorry for missing that point. How about this?

{noformat}
Object elementValue = (i % 2 == 0) ? keyElement : valueElement;
if (elementValue == null) {
  // the field cannot be NULL, so skip it entirely
  continue;
}
ObjectInspector elementInspector = (i % 2 == 0) ? keyInspector : valueInspector;
recordConsumer.startField(fieldName, i);
writeValue(elementValue, elementInspector, fieldType);
recordConsumer.endField(fieldName, i);
{noformat}


On Jan. 28, 2015, 5:23 a.m., Sergio Pena wrote:
> > Hi Sergio, thank you for your changes. I have a few new comments left.
> 
> Sergio Pena wrote:
>     Thanks Ferd for your comments.
>     I'll wait for your feedback before updating the other changes to see how we can make this code better.

Thank you for your reply. I prefer the previous one because it matches the method name better. For the writeMap method, I have one small suggestion for the code; please see my inline comments.
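The skip-on-null pattern discussed above can be exercised without the Parquet library by recording the call sequence against a stub. Note this is only an illustrative sketch: StubRecordConsumer and writeEntries are hypothetical stand-ins for Parquet's RecordConsumer and the writeMap loop, and values are written as plain strings instead of going through an ObjectInspector.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for Parquet's RecordConsumer: it only records the call
// sequence so the null-skipping logic can be checked without Parquet itself.
class StubRecordConsumer {
    final List<String> calls = new ArrayList<>();
    void startField(String name, int index) { calls.add("startField(" + name + "," + index + ")"); }
    void addString(String v) { calls.add("addString(" + v + ")"); }
    void endField(String name, int index) { calls.add("endField(" + name + "," + index + ")"); }
}

public class MapWriteSketch {
    // Writes the given elements, skipping null ones entirely: startField and
    // endField are never called for a null element, matching the constraint
    // that Parquet does not accept nulls inside a started field.
    static void writeEntries(StubRecordConsumer consumer, Object[] elements, String[] fieldNames) {
        for (int i = 0; i < elements.length; i++) {
            Object elementValue = elements[i];
            if (elementValue == null) {
                continue; // the field cannot be NULL, so skip it entirely
            }
            consumer.startField(fieldNames[i], i);
            consumer.addString(elementValue.toString());
            consumer.endField(fieldNames[i], i);
        }
    }

    public static void main(String[] args) {
        StubRecordConsumer consumer = new StubRecordConsumer();
        // The map entry 'key3 = null': only the key field should be emitted.
        writeEntries(consumer, new Object[] { "key3", null }, new String[] { "key", "value" });
        System.out.println(consumer.calls);
        // prints [startField(key,0), addString(key3), endField(key,0)]
    }
}
```

Running the sketch shows that the null value produces no startField/endField pair at all, which is exactly the behavior TestDataWritableWriter.testMapType() describes.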
- cheng


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30281/#review69935
-----------------------------------------------------------


On Jan. 27, 2015, 6:47 p.m., Sergio Pena wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30281/
> -----------------------------------------------------------
> 
> (Updated Jan. 27, 2015, 6:47 p.m.)
> 
> 
> Review request for hive, Ryan Blue, cheng xu, and Dong Chen.
> 
> 
> Bugs: HIVE-9333
>     https://issues.apache.org/jira/browse/HIVE-9333
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> This patch moves the ParquetHiveSerDe.serialize() implementation to the DataWritableWriter class in order to save the time spent materializing data in serialize().
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java ea4109d358f7c48d1e2042e5da299475de4a0a29
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 9caa4ed169ba92dbd863e4a2dc6d06ab226a4465
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java 060b1b722d32f3b2f88304a1a73eb249e150294b
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java 41b5f1c3b0ab43f734f8a211e3e03d5060c75434
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/ParquetRecordWriterWrapper.java e52c4bc0b869b3e60cb4bfa9e11a09a0d605ac28
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java a693aff18516d133abf0aae4847d3fe00b9f1c96
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetOutputFormat.java 667d3671547190d363107019cd9a2d105d26d336
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestParquetSerDe.java 007a665529857bcec612f638a157aa5043562a15
>   serde/src/java/org/apache/hadoop/hive/serde2/io/ParquetWritable.java PRE-CREATION
> 
> Diff:
> https://reviews.apache.org/r/30281/diff/
> 
> 
> Testing
> -------
> 
> The tests run were the following:
> 
> 1. JMH (Java microbenchmark)
> 
>    This benchmark called the Parquet serialize/write methods using Text writable objects.
> 
>    Class.method                Before change (ops/s)   After change (ops/s)
>    -------------------------------------------------------------------------
>    ParquetHiveSerDe.serialize  19,113                  249,528  -> ~13x speed increase
>    DataWritableWriter.write    5,033                   5,201    -> 3.34% speed increase
> 
> 2. Write 20 million rows (~1 GB file) from Text to Parquet
> 
>    I wrote a ~1 GB file in TextFile format, then converted it to Parquet using the following
>    statement: CREATE TABLE parquet STORED AS parquet AS SELECT * FROM text;
> 
>    Time it took to write the whole file BEFORE the changes: 93.758 s
>    Time it took to write the whole file AFTER the changes:  83.903 s
> 
>    That is about a 10% speed increase.
> 
> 
> Thanks,
> 
> Sergio Pena
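The ops/s figures above came from JMH. As a much cruder illustration of what such a measurement does, the self-contained timing loop below reports operations per second for a stand-in workload; everything here (OpsPerSecondSketch, fakeSerialize) is hypothetical and is not the benchmark that produced the numbers above, which ran JMH against the real ParquetHiveSerDe.serialize and DataWritableWriter.write.

```java
// Crude stand-in for a JMH throughput measurement: time a tight loop around a
// serialization-like operation and report operations per second.
public class OpsPerSecondSketch {

    // Trivial workload standing in for serialize(); a real measurement would
    // call the actual SerDe with Text writable objects.
    static int fakeSerialize(int i) {
        return Integer.toString(i).hashCode();
    }

    static double measureOpsPerSecond(int iterations) {
        long start = System.nanoTime();
        int sink = 0; // accumulate results so the JIT cannot drop the loop
        for (int i = 0; i < iterations; i++) {
            sink += fakeSerialize(i);
        }
        long elapsedNanos = System.nanoTime() - start;
        if (sink == 42) System.out.print(""); // keep 'sink' observably live
        return iterations / (elapsedNanos / 1e9);
    }

    public static void main(String[] args) {
        // Warm up first so the JIT compiles the hot loop, as JMH does with
        // its warmup iterations.
        measureOpsPerSecond(1_000_000);
        double opsPerSec = measureOpsPerSecond(1_000_000);
        System.out.printf("fakeSerialize: %.0f ops/s%n", opsPerSec);
    }
}
```

Unlike this sketch, JMH also handles forked JVMs, multiple measurement iterations, and dead-code elimination via Blackholes, which is why it is the right tool for numbers like the ones reported above.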