From: paul-rogers
To: dev@drill.apache.org
Subject: [GitHub] drill pull request #729: Drill 1328: Support table statistics for Parquet
Date: Sun, 12 Feb 2017 01:51:23 +0000 (UTC)

Github user paul-rogers commented on a diff in the pull request:

    https://github.com/apache/drill/pull/729#discussion_r100680828

--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/statistics/StatisticsAggBatch.java ---
@@ -0,0 +1,256 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.physical.impl.statistics;
+
+import com.google.common.collect.Lists;
+import com.sun.codemodel.JExpr;
+import org.apache.drill.common.expression.ErrorCollector;
+import org.apache.drill.common.expression.ErrorCollectorImpl;
+import org.apache.drill.common.expression.FunctionCallFactory;
+import org.apache.drill.common.expression.LogicalExpression;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.expression.ValueExpressions;
+import org.apache.drill.exec.exception.ClassTransformationException;
+import org.apache.drill.exec.exception.OutOfMemoryException;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.ClassGenerator;
+import org.apache.drill.exec.expr.CodeGenerator;
+import org.apache.drill.exec.expr.ExpressionTreeMaterializer;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.expr.ValueVectorWriteExpression;
+import org.apache.drill.exec.ops.FragmentContext;
+import org.apache.drill.exec.physical.config.StatisticsAggregate;
+import org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch;
+import org.apache.drill.exec.physical.impl.aggregate.StreamingAggTemplate;
+import org.apache.drill.exec.physical.impl.aggregate.StreamingAggregator;
+import org.apache.drill.exec.planner.physical.StatsAggPrel.OperatorPhase;
+import org.apache.drill.exec.record.BatchSchema.SelectionVectorMode;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.record.RecordBatch;
+import org.apache.drill.exec.record.TypedFieldId;
+import org.apache.drill.exec.store.ImplicitColumnExplorer;
+import org.apache.drill.exec.vector.ValueVector;
+import org.apache.drill.exec.vector.complex.FieldIdUtil;
+import org.apache.drill.exec.vector.complex.MapVector;
+
+import java.io.IOException;
+import java.util.GregorianCalendar;
+import java.util.List;
+import java.util.TimeZone;
+
+/**
+ * TODO: This needs cleanup. Currently the key values are constants and we compare the constants
+ * for every record. Seems unnecessary.
+ *
+ * Example input and output:
+ * Schema of incoming batch: region_id (VARCHAR), sales_city (VARCHAR), cnt (BIGINT)
+ * Schema of output:
+ *   "schema" : BIGINT - Schema number. For each schema change this number is incremented.
+ *   "computed" : BIGINT - Time at which the statistics were computed.
+ *   "columns" : MAP - Column names
+ *     "region_id" : VARCHAR
+ *     "sales_city" : VARCHAR
+ *     "cnt" : VARCHAR
+ *   "statscount" : MAP
+ *     "region_id" : BIGINT - statscount(region_id) - aggregation over all values of region_id
+ *                   in the incoming batch
+ *     "sales_city" : BIGINT - statscount(sales_city)
+ *     "cnt" : BIGINT - statscount(cnt)
+ *   "nonnullstatcount" : MAP
+ *     "region_id" : BIGINT - nonnullstatcount(region_id)
+ *     "sales_city" : BIGINT - nonnullstatcount(sales_city)
+ *     "cnt" : BIGINT - nonnullstatcount(cnt)
+ *   .... another map for the next stats function ....
--- End diff --

Note that Drill is columnar, and there is overhead in allocating a vector. It seems that here we are creating very wide records that are not very deep: three, four, or more value vectors in the stats record for every column in the original data. And we store just one record in each? This will likely be tremendously inefficient. Vectors carry a large amount of overhead, and they can also be memory intensive when allocated incorrectly (e.g., pre-allocating a vector of a given size). Would it be better to implement the record as a small, fixed set of lists (repeated columns)?
* schema: Integer
* timestamp: BigInt
* column_names: Repeated VarChar (the index of each name is significant)
* fn1: Repeated BigInt (the index of each value matches the index of the column name)
* fn2: ...

The above makes far more efficient use of vector memory than the large maps do. Moreover, unlike the maps, the above can easily handle nested fields: "a.b.c" is a perfectly good VarChar value in a list, but it does not work as a key in a map (the dots are interpreted as creating nested maps).
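As an illustration only (plain Java collections standing in for Drill's value vectors, with hypothetical class and method names), the difference between the wide per-column map layout and the proposed parallel-list layout might be sketched like this:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch, NOT Drill's ValueVector API: contrasts the wide
// "one map entry per column per stats function" layout with the proposed
// "few parallel lists" layout for a statistics record.
public class StatsLayoutSketch {

  // Wide layout: one nested map per stats function, one entry per column.
  // With C columns and F functions this needs roughly C * F leaf containers
  // (in Drill, C * F value vectors, each holding a single value).
  static Map<String, Map<String, Long>> wideLayout(List<String> columns) {
    Map<String, Map<String, Long>> record = new HashMap<>();
    for (String fn : new String[] {"statscount", "nonnullstatcount"}) {
      Map<String, Long> perColumn = new HashMap<>();
      for (String col : columns) {
        perColumn.put(col, 0L); // placeholder aggregate value
      }
      record.put(fn, perColumn);
    }
    return record;
  }

  // Proposed layout: one repeated column of names plus one repeated column
  // per stats function; index i of each value list matches columnNames[i].
  // Nested field names like "a.b.c" are ordinary string values here.
  static class ListLayout {
    final List<String> columnNames = new ArrayList<>();
    final List<Long> statsCount = new ArrayList<>();
    final List<Long> nonNullCount = new ArrayList<>();

    void addColumn(String name, long count, long nonNull) {
      columnNames.add(name);
      statsCount.add(count);
      nonNullCount.add(nonNull);
    }
  }

  public static void main(String[] args) {
    List<String> cols = List.of("region_id", "sales_city", "a.b.c");

    Map<String, Map<String, Long>> wide = wideLayout(cols);
    // 2 functions * 3 columns = 6 leaf containers in the wide layout.
    System.out.println(wide.size() * wide.get("statscount").size()); // prints 6

    ListLayout narrow = new ListLayout();
    narrow.addColumn("region_id", 100, 95);
    narrow.addColumn("sales_city", 100, 100);
    narrow.addColumn("a.b.c", 100, 90); // dotted name is just a value here
    // Only 3 lists total, regardless of how many columns are added.
    System.out.println(narrow.columnNames.indexOf("a.b.c")); // prints 2
  }
}
```

The point of the sketch: the number of containers in the wide layout grows with columns times functions, while the list layout stays at a fixed handful of (deep) repeated columns, which is the shape columnar vectors are efficient at.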