drill-dev mailing list archives

From paul-rogers <...@git.apache.org>
Subject [GitHub] drill pull request #729: Drill 1328: Support table statistics for Parquet
Date Sun, 12 Feb 2017 01:51:23 GMT
Github user paul-rogers commented on a diff in the pull request:

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/statistics/StatisticsAggBatch.java
    @@ -0,0 +1,256 @@
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + * <p/>
    + * http://www.apache.org/licenses/LICENSE-2.0
    + * <p/>
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.drill.exec.physical.impl.statistics;
    +import com.google.common.collect.Lists;
    +import com.sun.codemodel.JExpr;
    +import org.apache.drill.common.expression.ErrorCollector;
    +import org.apache.drill.common.expression.ErrorCollectorImpl;
    +import org.apache.drill.common.expression.FunctionCallFactory;
    +import org.apache.drill.common.expression.LogicalExpression;
    +import org.apache.drill.common.expression.SchemaPath;
    +import org.apache.drill.common.expression.ValueExpressions;
    +import org.apache.drill.exec.exception.ClassTransformationException;
    +import org.apache.drill.exec.exception.OutOfMemoryException;
    +import org.apache.drill.exec.exception.SchemaChangeException;
    +import org.apache.drill.exec.expr.ClassGenerator;
    +import org.apache.drill.exec.expr.CodeGenerator;
    +import org.apache.drill.exec.expr.ExpressionTreeMaterializer;
    +import org.apache.drill.exec.expr.TypeHelper;
    +import org.apache.drill.exec.expr.ValueVectorWriteExpression;
    +import org.apache.drill.exec.ops.FragmentContext;
    +import org.apache.drill.exec.physical.config.StatisticsAggregate;
    +import org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch;
    +import org.apache.drill.exec.physical.impl.aggregate.StreamingAggTemplate;
    +import org.apache.drill.exec.physical.impl.aggregate.StreamingAggregator;
    +import org.apache.drill.exec.planner.physical.StatsAggPrel.OperatorPhase;
    +import org.apache.drill.exec.record.BatchSchema.SelectionVectorMode;
    +import org.apache.drill.exec.record.MaterializedField;
    +import org.apache.drill.exec.record.RecordBatch;
    +import org.apache.drill.exec.record.TypedFieldId;
    +import org.apache.drill.exec.store.ImplicitColumnExplorer;
    +import org.apache.drill.exec.vector.ValueVector;
    +import org.apache.drill.exec.vector.complex.FieldIdUtil;
    +import org.apache.drill.exec.vector.complex.MapVector;
    +import java.io.IOException;
    +import java.util.GregorianCalendar;
    +import java.util.List;
    +import java.util.TimeZone;
     + * TODO: This needs cleanup. Currently the key values are constants and we compute them
     + * for every record. Seems unnecessary.
    + *
    + * Example input and output:
    + * Schema of incoming batch: region_id (VARCHAR), sales_city (VARCHAR), cnt (BIGINT)
    + * Schema of output:
    + *    "schema" : BIGINT - Schema number. For each schema change this number is incremented.
    + *    "computed" : BIGINT - What time is it computed?
    + *    "columns"       : MAP - Column names
    + *       "region_id"  : VARCHAR
    + *       "sales_city" : VARCHAR
    + *       "cnt"        : VARCHAR
    + *    "statscount" : MAP
    + *       "region_id"  : BIGINT - statscount(region_id) - aggregation over all values
of region_id
    + *                      in incoming batch
    + *       "sales_city" : BIGINT - statscount(sales_city)
    + *       "cnt"        : BIGINT - statscount(cnt)
    + *    "nonnullstatcount" : MAP
    + *       "region_id"  : BIGINT - nonnullstatcount(region_id)
    + *       "sales_city" : BIGINT - nonnullstatcount(sales_city)
    + *       "cnt"        : BIGINT - nonnullstatcount(cnt)
    + *   .... another map for next stats function ....
    --- End diff --
    Note that Drill is columnar; there is overhead in allocating a vector. It seems that here we are creating very wide records that are not very deep: we create 3, 4 or more value vectors in the stats record for every column in the original data. And we store just one record in each?
    This will likely be tremendously inefficient. Vectors carry a very large amount of overhead, and they can also be memory intensive when allocated incorrectly (e.g., pre-allocating a vector of a given size).
    Would it be better to implement the record as a set of very few lists (repeated columns)?
    * schema: Integer
    * timestamp: BigInt
    * column_names: Repeated Varchar (index of name is important)
    * fn1: Repeated BigInt (index of value matches index of column name)
    * fn2: ...
    The above makes far more efficient use of vector memory than the large maps do.
    Plus, unlike the maps, the above can easily handle nested fields: "a.b.c" is a fine VarChar value in a list, but it does not work as a key in a map (since the dots are interpreted as creating nested maps).
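    To make the memory argument concrete, here is a minimal back-of-the-envelope sketch (plain Java, not Drill APIs; the class and method names, and the exact per-layout vector counts, are illustrative assumptions, not the PR's actual allocation behavior). It counts how many value vectors each layout would allocate for a stats record:

    ```java
    // Hypothetical sketch comparing vector counts for the two stats-record layouts.
    public class StatsLayoutSketch {

        // Map-based layout (as in the diff above): one vector per
        // (stats function x column) inside each stats map, plus one VARCHAR
        // per column in the "columns" map, plus the "schema" and "computed" scalars.
        static int mapLayoutVectorCount(int numColumns, int numStatsFns) {
            int scalars = 2;                  // "schema", "computed"
            int columnNames = numColumns;     // one VARCHAR vector per column name
            return scalars + columnNames + numStatsFns * numColumns;
        }

        // Repeated-column layout (proposed): one repeated vector per stats
        // function, one repeated VARCHAR vector for the column names,
        // plus the same two scalars. Independent of the column count.
        static int listLayoutVectorCount(int numColumns, int numStatsFns) {
            return 2 + 1 + numStatsFns;       // scalars + column_names + one per fn
        }

        public static void main(String[] args) {
            // e.g. a 100-column table with 4 stats functions
            System.out.println("map layout:  " + mapLayoutVectorCount(100, 4));
            System.out.println("list layout: " + listLayoutVectorCount(100, 4));
        }
    }
    ```

    For a 100-column table with 4 stats functions, the map layout allocates hundreds of one-record vectors while the list layout needs only a handful, which is the efficiency gap described above.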

