From: davies
To: reviews@spark.apache.org
Reply-To: reviews@spark.apache.org
Subject: [GitHub] spark pull request: [SPARK-14275][SQL] Reimplement TypedAggregateE...
Message-Id: <20160413201657.903EADFC4F@git1-us-west.apache.org>
Date: Wed, 13 Apr 2016 20:16:57 +0000 (UTC)

Github user davies commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12067#discussion_r59617422

    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/TypedAggregateExpression.scala ---
    @@ -19,133 +19,153 @@ package org.apache.spark.sql.execution.aggregate
     
     import scala.language.existentials
     
    -import org.apache.spark.internal.Logging
     import org.apache.spark.sql.Encoder
    -import org.apache.spark.sql.catalyst.InternalRow
    -import org.apache.spark.sql.catalyst.encoders.{encoderFor, ExpressionEncoder, OuterScopes}
    +import org.apache.spark.sql.catalyst.analysis.{UnresolvedAttribute, UnresolvedDeserializer, UnresolvedExtractValue}
    +import org.apache.spark.sql.catalyst.encoders.encoderFor
     import org.apache.spark.sql.catalyst.expressions._
    -import org.apache.spark.sql.catalyst.expressions.aggregate.ImperativeAggregate
    +import org.apache.spark.sql.catalyst.expressions.aggregate.DeclarativeAggregate
     import org.apache.spark.sql.expressions.Aggregator
     import org.apache.spark.sql.types._
     
     object TypedAggregateExpression {
    -  def apply[A, B : Encoder, C : Encoder](
    -      aggregator: Aggregator[A, B, C]): TypedAggregateExpression = {
    +  def apply[BUF : Encoder, OUT : Encoder](
    +      aggregator: Aggregator[_, BUF, OUT]): TypedAggregateExpression = {
    +    val bufferEncoder = encoderFor[BUF]
    +    // We will insert the deserializer and function call expression at the bottom of each
    +    // serializer expression while executing `TypedAggregateExpression`, which means multiple
    +    // serializer expressions will all evaluate the same sub-expression at the bottom. To avoid
    +    // re-evaluation, we always use a single serializer expression to serialize the buffer
    +    // object into a single-field row, no matter whether the encoder is flat or not. We also
    +    // need to update the deserializer to read in all fields from that single-field row.
    +    // TODO: remove this trick after we have better integration of subexpression elimination and
    --- End diff --
    
    Can we hold this PR a little bit? Let me think about how to do subexpression elimination in aggregate.
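
[Editor's note] For context, below is a minimal sketch of the kind of user-defined Aggregator that TypedAggregateExpression backs. It uses the public Aggregator API as it settled in Spark 2.x (with bufferEncoder/outputEncoder methods); the code under review instead takes the encoders via context bounds, as the diff's apply signature shows, so this is an approximation rather than the PR's API. The names SumCount and LongAverage are illustrative only. The two-field buffer is exactly the non-flat case that the quoted comment's "serialize the buffer into a single-field row" trick is concerned with.

import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

// A two-field aggregation buffer; its encoder is not "flat".
case class SumCount(sum: Long, count: Long)

// Aggregator[IN, BUF, OUT]: averages a Dataset[Long] via a SumCount buffer.
object LongAverage extends Aggregator[Long, SumCount, Double] {
  def zero: SumCount = SumCount(0L, 0L)
  def reduce(b: SumCount, a: Long): SumCount = SumCount(b.sum + a, b.count + 1)
  def merge(b1: SumCount, b2: SumCount): SumCount =
    SumCount(b1.sum + b2.sum, b1.count + b2.count)
  def finish(r: SumCount): Double =
    if (r.count == 0L) Double.NaN else r.sum.toDouble / r.count
  def bufferEncoder: Encoder[SumCount] = Encoders.product[SumCount]
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

// Usage on a Dataset[Long] (assuming an existing SparkSession named spark):
//   import spark.implicits._
//   val avg = spark.range(1, 101).map(_.toLong).select(LongAverage.toColumn).head()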