From: GitBox
To: reviews@spark.apache.org
Subject: [GitHub] [spark] MaxGekk commented on a change in pull request #27287: [SPARK-30530][SQL][FOLLOW-UP] Remove unnecessary codes and fix comments accordingly in UnivocityParser
Message-ID: <157950250484.23269.98349078494995474.gitbox@gitbox.apache.org>
Date: Mon, 20 Jan 2020 06:41:44 -0000

MaxGekk commented on a change in pull request #27287: [SPARK-30530][SQL][FOLLOW-UP] Remove unnecessary codes and fix comments accordingly in UnivocityParser

URL: https://github.com/apache/spark/pull/27287#discussion_r368388359

##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/UnivocityParser.scala
##########

@@ -203,7 +202,11 @@ class UnivocityParser(
     }
   }

-  private val doParse = if (options.columnPruning && requiredSchema.isEmpty) {
+  /**
+   * Parses a single CSV string and turns it into either one resulting row or no row (if the
+   * record is malformed).
+   */
+  val parse: String => Option[InternalRow] = if (options.columnPruning && requiredSchema.isEmpty) {

Review comment:

It can be just a normal method:

```scala
def parse(input: String): Option[InternalRow] = {
  if (options.columnPruning && requiredSchema.isEmpty) {
    // If `columnPruning` is enabled and only partition attributes are scanned,
    // `schema` gets empty.
    Some(InternalRow.empty)
  } else {
    // Parse if columnPruning is disabled or requiredSchema is non-empty.
    convert(tokenizer.parseLine(input))
  }
}
```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org
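[Editor's note: the sketch below is not part of the original message.] The trade-off behind this review is between a `val` holding a function (the PR's shape, where the `columnPruning` condition is evaluated once at construction time) and a plain `def` (the reviewer's suggestion, where the condition is re-checked on every call). A minimal standalone sketch of both shapes, using illustrative stand-in names (`ToyParser`, `columnPruning`, `requiredSchemaIsEmpty`, and `String` results instead of `InternalRow`) rather than the real UnivocityParser API:

```scala
// Hypothetical stand-in for UnivocityParser, reduced to the branch
// structure the review discusses. Not the real Spark API.
class ToyParser(columnPruning: Boolean, requiredSchemaIsEmpty: Boolean) {

  // Shape in the PR: the condition is tested once, when the object is
  // constructed, and the val holds the already-chosen branch.
  val parseVal: String => Option[String] =
    if (columnPruning && requiredSchemaIsEmpty) {
      // Only partition attributes are scanned: every record maps to an
      // "empty row" regardless of its content.
      (_: String) => Some("empty-row")
    } else {
      // Otherwise actually look at the input (a toy stand-in for
      // convert(tokenizer.parseLine(input))).
      (input: String) => if (input.nonEmpty) Some(input) else None
    }

  // Shape the reviewer suggests: a plain method that re-checks the
  // condition on each call.
  def parseDef(input: String): Option[String] =
    if (columnPruning && requiredSchemaIsEmpty) Some("empty-row")
    else if (input.nonEmpty) Some(input)
    else None
}
```

Both shapes return the same results for the same inputs; the `val` avoids re-testing a condition that is fixed once the parser is built, while the `def` is simpler to read and step through, which is presumably why the reviewer prefers it for a cheap boolean check.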