drill-commits mailing list archives

From j...@apache.org
Subject drill git commit: DRILL-2173: Partition queries to drive dynamic pruning
Date Mon, 06 Apr 2015 23:41:42 GMT
Repository: drill
Updated Branches:
  refs/heads/master ca7399001 -> af7a52bee


DRILL-2173: Partition queries to drive dynamic pruning

Adds a new interface on the QueryContext, as well as on individual schemas, for exploring partitions of tables.
Adds an injectable type for the partition explorer for use in UDFs. This is hooked into both expression
materialization and interpreted evaluation. The FragmentContext throws an exception to tell users to turn on
constant folding if a UDF that uses the PartitionExplorer makes it past planning.
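The selection performed by the generated directory-explorer UDFs can be sketched in isolation. The following standalone class is illustrative only (the class and method names are hypothetical, and Drill's buffer/holder machinery is omitted); it reproduces just the core of the maxdir logic, a case-sensitive maximum over the sub-partition paths followed by extraction of the final path segment:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class MaxDirSketch {

  // Core of the maxdir logic: pick the lexicographically greatest
  // sub-partition path, then return only its final path segment.
  static String maxDir(Iterable<String> subPartitions) {
    Iterator<String> it = subPartitions.iterator();
    if (!it.hasNext()) {
      throw new RuntimeException("Table does not contain sub-partitions.");
    }
    String best = it.next();
    while (it.hasNext()) {
      String curr = it.next();
      if (best.compareTo(curr) < 0) { // case-sensitive; imaxdir would use compareToIgnoreCase
        best = curr;
      }
    }
    String[] parts = best.split("/"); // the generated code splits on File.separator
    return parts[parts.length - 1];
  }

  public static void main(String[] args) {
    List<String> partitions = Arrays.asList(
        "data_directory/2014_01", "data_directory/2014_03", "data_directory/2014_02");
    System.out.println(maxDir(partitions)); // prints 2014_03
  }
}
```

The mindir/imindir variants differ only in reversing the comparison.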

2173 update - Address Chris' review comments.

Change the PartitionExplorer to return an Iterable<String> instead of String[]

Add interface level description to PartitionExplorer and StoragePluginPartitionExplorer.

New inner class in FileSystemPlugin to fulfill the new Iterable interface for partitions.

Formatting/cleanup fixes

Clean up error reporting code in MaxDir UDF. Remove method to get a string from a DrillBuf, as it was already defined in StringFunctionHelpers. Add new utility method to specifically convert a VarCharHolder to a string to remove some boilerplate.

Fixed an errant copy paste in a comment and removed unused imports.

Fix docs in FileSystemPlugin, belongs with the 2173 changes.

Fix references in Javadoc to properly use @link instead of @see.

2173 fixes: correctly return an empty list of sub-partitions if the path requested through the partition explorer interface is a file. Fix a few docs.

More 2173, finishing Chris' comments

2173 update - Add validation for PartitionExplorer injectable in UdfUtilities.

small change to fix refactored unit tests.

cleanup for 2173

Fix maxdir UDF so it can compile in runtime generated code as well as the interpreted expression system (needed to fully qualify classes and interfaces). It still fails to execute, as we prevent requesting a schema from a non-root fragment. We do not expect these types of functions to ever be used without constant folding so this should not be an issue.

Update error message in the case where the partition explorer is being used outside of planning.

Adding FreeMarker-generated maxdir, imaxdir, mindir and imindir

remove import that violates build checks, fix typo in new test class name

Separate out SubDirectoryList from FileSystemSchemaFactory.

Fix unit test to correctly test all four functions.

Update partition explorer to take List instead of Collection. As the lists are used in parallel, it should be explicit that these are expected to be ordered (which Collections do not guarantee).
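A small standalone sketch (class and method names are hypothetical) of why index-aligned List parameters matter here: the column at position i is paired with the value at position i, a contract an unordered Collection could not express:

```java
import java.util.Arrays;
import java.util.List;

public class PartitionFilterSketch {

  // The two lists are walked in parallel: partitionColumns.get(i) names the
  // column whose required value is partitionValues.get(i). This index pairing
  // is the reason the interface takes List rather than Collection.
  static String describeFilter(List<String> partitionColumns, List<String> partitionValues) {
    if (partitionColumns.size() != partitionValues.size()) {
      throw new IllegalArgumentException("column and value lists must be the same length");
    }
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < partitionColumns.size(); i++) {
      if (i > 0) {
        sb.append(" AND ");
      }
      sb.append(partitionColumns.get(i)).append(" = '").append(partitionValues.get(i)).append("'");
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(describeFilter(Arrays.asList("dir0", "dir1"), Arrays.asList("2014", "01")));
    // prints: dir0 = '2014' AND dir1 = '01'
  }
}
```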

Drop the extra file generated due to the header in the FreeMarker template, fix a typo, and remove an unused import.


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/af7a52be
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/af7a52be
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/af7a52be

Branch: refs/heads/master
Commit: af7a52beeaed565e243d2d54ba75337ab95b924d
Parents: ca73990
Author: Jason Altekruse <altekrusejason@gmail.com>
Authored: Tue Feb 3 12:13:22 2015 -0800
Committer: Jason Altekruse <altekrusejason@gmail.com>
Committed: Mon Apr 6 08:13:26 2015 -0700

----------------------------------------------------------------------
 .../exec/store/hbase/HBaseSchemaFactory.java    |   2 +-
 .../store/hive/schema/HiveSchemaFactory.java    |   2 +-
 .../store/mongo/schema/MongoSchemaFactory.java  |   2 +-
 .../codegen/templates/DirectoryExplorers.java   | 107 +++++++++++++++++++
 .../drill/exec/expr/fn/FunctionConverter.java   |   3 -
 .../fn/interpreter/InterpreterEvaluator.java    |  10 --
 .../apache/drill/exec/ops/FragmentContext.java  |   9 ++
 .../org/apache/drill/exec/ops/QueryContext.java |   7 ++
 .../org/apache/drill/exec/ops/UdfUtilities.java |  24 +++++
 .../apache/drill/exec/store/AbstractSchema.java |  15 ++-
 .../drill/exec/store/PartitionExplorer.java     | 102 ++++++++++++++++++
 .../drill/exec/store/PartitionExplorerImpl.java |  42 ++++++++
 .../exec/store/PartitionNotFoundException.java  |  35 ++++++
 .../exec/store/SchemaPartitionExplorer.java     |  49 +++++++++
 .../drill/exec/store/SubSchemaWrapper.java      |  12 ++-
 .../exec/store/dfs/FileSystemSchemaFactory.java |  21 +++-
 .../drill/exec/store/dfs/SubDirectoryList.java  |  73 +++++++++++++
 .../exec/store/dfs/WorkspaceSchemaFactory.java  |  18 ++++
 .../exec/fn/interp/TestConstantFolding.java     |  48 ++++++---
 .../exec/planner/TestDirectoryExplorerUDFs.java | 106 ++++++++++++++++++
 20 files changed, 650 insertions(+), 37 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseSchemaFactory.java
----------------------------------------------------------------------
diff --git a/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseSchemaFactory.java b/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseSchemaFactory.java
index 7b76092..7a0a64b 100644
--- a/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseSchemaFactory.java
+++ b/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseSchemaFactory.java
@@ -62,7 +62,7 @@ public class HBaseSchemaFactory implements SchemaFactory {
     }
 
     @Override
-    public Schema getSubSchema(String name) {
+    public AbstractSchema getSubSchema(String name) {
       return null;
     }
 

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/schema/HiveSchemaFactory.java
----------------------------------------------------------------------
diff --git a/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/schema/HiveSchemaFactory.java b/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/schema/HiveSchemaFactory.java
index 023517b..0e16e6f 100644
--- a/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/schema/HiveSchemaFactory.java
+++ b/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/schema/HiveSchemaFactory.java
@@ -202,7 +202,7 @@ public class HiveSchemaFactory implements SchemaFactory {
     }
 
     @Override
-    public Schema getSubSchema(String name) {
+    public AbstractSchema getSubSchema(String name) {
       List<String> tables;
       try {
         List<String> dbs = databases.get(DATABASES);

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/schema/MongoSchemaFactory.java
----------------------------------------------------------------------
diff --git a/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/schema/MongoSchemaFactory.java b/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/schema/MongoSchemaFactory.java
index 32c42ba..a227c9a 100644
--- a/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/schema/MongoSchemaFactory.java
+++ b/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/schema/MongoSchemaFactory.java
@@ -136,7 +136,7 @@ public class MongoSchemaFactory implements SchemaFactory {
     }
 
     @Override
-    public Schema getSubSchema(String name) {
+    public AbstractSchema getSubSchema(String name) {
       List<String> tables;
       try {
         if (! schemaMap.containsKey(name)) {

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/codegen/templates/DirectoryExplorers.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/codegen/templates/DirectoryExplorers.java b/exec/java-exec/src/main/codegen/templates/DirectoryExplorers.java
new file mode 100644
index 0000000..85e0842
--- /dev/null
+++ b/exec/java-exec/src/main/codegen/templates/DirectoryExplorers.java
@@ -0,0 +1,107 @@
+/*******************************************************************************
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ ******************************************************************************/
+
+<@pp.dropOutputFile />
+
+<@pp.changeOutputFile name="/org/apache/drill/exec/expr/fn/impl/DirectoryExplorers.java" />
+
+<#include "/@includes/license.ftl" />
+
+package org.apache.drill.exec.expr.fn.impl;
+
+import io.netty.buffer.DrillBuf;
+import org.apache.drill.exec.expr.DrillSimpleFunc;
+import org.apache.drill.exec.expr.annotations.FunctionTemplate;
+import org.apache.drill.exec.expr.annotations.Output;
+import org.apache.drill.exec.expr.annotations.Param;
+import org.apache.drill.exec.expr.holders.VarCharHolder;
+
+import javax.inject.Inject;
+
+/**
+ * This file is generated with Freemarker using the template exec/java-exec/src/main/codegen/templates/DirectoryExplorers.java
+ */
+public class DirectoryExplorers {
+  static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(DirectoryExplorers.class);
+
+  <#list [ { "name" : "\"maxdir\"", "functionClassName" : "MaxDir", "comparison" : "compareTo(curr) < 0", "goal" : "maximum", "comparisonType" : "case-sensitive"},
+           { "name" : "\"imaxdir\"", "functionClassName" : "IMaxDir", "comparison" : "compareToIgnoreCase(curr) < 0", "goal" : "maximum", "comparisonType" : "case-insensitive"},
+           { "name" : "\"mindir\"", "functionClassName" : "MinDir", "comparison" : "compareTo(curr) > 0", "goal" : "minimum", "comparisonType" : "case-sensitive"},
+           { "name" : "\"imindir\"", "functionClassName" : "IMinDir", "comparison" : "compareToIgnoreCase(curr) > 0", "goal" : "minimum", "comparisonType" : "case-insensitive"}
+  ] as dirAggrProps>
+
+
+  @FunctionTemplate(name = ${dirAggrProps.name}, scope = FunctionTemplate.FunctionScope.SIMPLE, nulls = FunctionTemplate.NullHandling.INTERNAL)
+  public static class ${dirAggrProps.functionClassName} implements DrillSimpleFunc {
+
+    @Param VarCharHolder schema;
+    @Param  VarCharHolder table;
+    @Output VarCharHolder out;
+    @Inject DrillBuf buffer;
+    @Inject org.apache.drill.exec.store.PartitionExplorer partitionExplorer;
+
+    public void setup() {
+    }
+
+    public void eval() {
+      Iterable<String> subPartitions;
+      try {
+        subPartitions = partitionExplorer.getSubPartitions(
+            org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.getStringFromVarCharHolder(schema),
+            org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.getStringFromVarCharHolder(table),
+            new java.util.ArrayList<String>(),
+            new java.util.ArrayList<String>());
+      } catch (org.apache.drill.exec.store.PartitionNotFoundException e) {
+        throw new RuntimeException(
+            String.format("Error in %s function: Table %s does not exist in schema %s ",
+                ${dirAggrProps.name},
+                org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.getStringFromVarCharHolder(table),
+                org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.getStringFromVarCharHolder(schema))
+        );
+      }
+      java.util.Iterator partitionIterator = subPartitions.iterator();
+      if (!partitionIterator.hasNext()) {
+        throw new RuntimeException(
+            String.format("Error in %s function: Table %s in schema %s does not contain sub-partitions.",
+                ${dirAggrProps.name},
+                org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.getStringFromVarCharHolder(table),
+                org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.getStringFromVarCharHolder(schema)
+            )
+        );
+      }
+      String subPartitionStr = (String) partitionIterator.next();
+      String curr;
+      // find the ${dirAggrProps.goal} directory in the list using a ${dirAggrProps.comparisonType} string comparison
+      while (partitionIterator.hasNext()){
+        curr = (String) partitionIterator.next();
+        if (subPartitionStr.${dirAggrProps.comparison}) {
+          subPartitionStr = curr;
+        }
+      }
+      String[] subPartitionParts = subPartitionStr.split(java.io.File.separator);
+      subPartitionStr = subPartitionParts[subPartitionParts.length - 1];
+      byte[] result = subPartitionStr.getBytes();
+      out.buffer = buffer = buffer.reallocIfNeeded(result.length);
+
+      out.buffer.setBytes(0, subPartitionStr.getBytes(), 0, result.length);
+      out.start = 0;
+      out.end = result.length;
+    }
+  }
+  </#list>
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/FunctionConverter.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/FunctionConverter.java b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/FunctionConverter.java
index ab121b0..ef8a5b1 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/FunctionConverter.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/FunctionConverter.java
@@ -18,14 +18,12 @@
 package org.apache.drill.exec.expr.fn;
 
 import com.google.common.base.Joiner;
-import io.netty.buffer.DrillBuf;
 
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.StringReader;
 import java.lang.reflect.Field;
 import java.net.URL;
-import java.util.Arrays;
 import java.util.List;
 import java.util.Map;
 
@@ -43,7 +41,6 @@ import org.apache.drill.exec.expr.annotations.Workspace;
 import org.apache.drill.exec.expr.fn.DrillFuncHolder.ValueReference;
 import org.apache.drill.exec.expr.fn.DrillFuncHolder.WorkspaceReference;
 import org.apache.drill.exec.expr.holders.ValueHolder;
-import org.apache.drill.exec.ops.QueryDateTimeInfo;
 import org.apache.drill.exec.ops.UdfUtilities;
 import org.apache.drill.exec.vector.complex.reader.FieldReader;
 import org.apache.drill.exec.vector.complex.writer.BaseWriter.ComplexWriter;

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/interpreter/InterpreterEvaluator.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/interpreter/InterpreterEvaluator.java b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/interpreter/InterpreterEvaluator.java
index 664b12a..e081796 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/interpreter/InterpreterEvaluator.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/interpreter/InterpreterEvaluator.java
@@ -34,10 +34,6 @@ import org.apache.drill.common.expression.NullExpression;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.expression.TypedNullConstant;
 import org.apache.drill.common.expression.ValueExpressions;
-import org.apache.drill.common.expression.ValueExpressions.BooleanExpression;
-import org.apache.drill.common.expression.ValueExpressions.DateExpression;
-import org.apache.drill.common.expression.ValueExpressions.TimeExpression;
-import org.apache.drill.common.expression.ValueExpressions.TimeStampExpression;
 import org.apache.drill.common.expression.visitors.AbstractExprVisitor;
 import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.exec.expr.DrillFuncHolderExpr;
@@ -49,13 +45,8 @@ import org.apache.drill.exec.expr.annotations.Output;
 import org.apache.drill.exec.expr.annotations.Param;
 import org.apache.drill.exec.expr.fn.DrillSimpleFuncHolder;
 import org.apache.drill.exec.expr.holders.BitHolder;
-import org.apache.drill.exec.expr.holders.DateHolder;
-import org.apache.drill.exec.expr.holders.NullableBigIntHolder;
 import org.apache.drill.exec.expr.holders.NullableBitHolder;
-import org.apache.drill.exec.expr.holders.TimeHolder;
-import org.apache.drill.exec.expr.holders.TimeStampHolder;
 import org.apache.drill.exec.expr.holders.ValueHolder;
-import org.apache.drill.exec.ops.QueryDateTimeInfo;
 import org.apache.drill.exec.ops.UdfUtilities;
 import org.apache.drill.exec.record.RecordBatch;
 import org.apache.drill.exec.record.VectorAccessible;
@@ -63,7 +54,6 @@ import org.apache.drill.exec.vector.ValueHolderHelper;
 import org.apache.drill.exec.vector.ValueVector;
 
 import javax.inject.Inject;
-import java.lang.reflect.Field;
 import java.lang.reflect.Method;
 
 public class InterpreterEvaluator {

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/ops/FragmentContext.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/ops/FragmentContext.java b/exec/java-exec/src/main/java/org/apache/drill/exec/ops/FragmentContext.java
index a4ac724..18b93e9 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/ops/FragmentContext.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/ops/FragmentContext.java
@@ -35,6 +35,7 @@ import org.apache.drill.exec.expr.CodeGenerator;
 import org.apache.drill.exec.expr.fn.FunctionImplementationRegistry;
 import org.apache.drill.exec.memory.BufferAllocator;
 import org.apache.drill.exec.memory.OutOfMemoryException;
+import org.apache.drill.exec.planner.physical.PlannerSettings;
 import org.apache.drill.exec.proto.BitControl.PlanFragment;
 import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
 import org.apache.drill.exec.proto.ExecProtos.FragmentHandle;
@@ -45,6 +46,7 @@ import org.apache.drill.exec.server.DrillbitContext;
 import org.apache.drill.exec.server.options.FragmentOptionManager;
 import org.apache.drill.exec.server.options.OptionList;
 import org.apache.drill.exec.server.options.OptionManager;
+import org.apache.drill.exec.store.PartitionExplorer;
 import org.apache.drill.exec.work.batch.IncomingBuffers;
 
 import com.google.common.collect.Maps;
@@ -316,4 +318,11 @@ public class FragmentContext implements AutoCloseable, UdfUtilities {
   public DrillBuf getManagedBuffer(int size) {
     return bufferManager.getManagedBuffer(size);
   }
+
+  @Override
+  public PartitionExplorer getPartitionExplorer() {
+    throw new UnsupportedOperationException(String.format("The partition explorer interface can only be used " +
+        "in functions that can be evaluated at planning time. Make sure that the %s configuration " +
+        "option is set to true.", PlannerSettings.CONSTANT_FOLDING.getOptionName()));
+  }
 }

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/ops/QueryContext.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/ops/QueryContext.java b/exec/java-exec/src/main/java/org/apache/drill/exec/ops/QueryContext.java
index 3b51a69..2fa0b18 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/ops/QueryContext.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/ops/QueryContext.java
@@ -36,6 +36,8 @@ import org.apache.drill.exec.rpc.user.UserSession;
 import org.apache.drill.exec.server.DrillbitContext;
 import org.apache.drill.exec.server.options.OptionManager;
 import org.apache.drill.exec.server.options.QueryOptionManager;
+import org.apache.drill.exec.store.PartitionExplorer;
+import org.apache.drill.exec.store.PartitionExplorerImpl;
 import org.apache.drill.exec.store.StoragePluginRegistry;
 
 // TODO except for a couple of tests, this is only created by Foreman
@@ -153,6 +155,11 @@ public class QueryContext implements AutoCloseable, UdfUtilities {
   }
 
   @Override
+  public PartitionExplorer getPartitionExplorer() {
+    return new PartitionExplorerImpl(getRootSchema());
+  }
+
+  @Override
   public void close() throws Exception {
     try {
       if (!closed) {

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/ops/UdfUtilities.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/ops/UdfUtilities.java b/exec/java-exec/src/main/java/org/apache/drill/exec/ops/UdfUtilities.java
index f7a1a04..1cdece1 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/ops/UdfUtilities.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/ops/UdfUtilities.java
@@ -19,6 +19,7 @@ package org.apache.drill.exec.ops;
 
 import com.google.common.collect.ImmutableMap;
 import io.netty.buffer.DrillBuf;
+import org.apache.drill.exec.store.PartitionExplorer;
 
 /**
  * Defines the query state and shared resources available to UDFs through
@@ -34,6 +35,7 @@ public interface UdfUtilities {
       new ImmutableMap.Builder<Class, String>()
           .put(DrillBuf.class, "getManagedBuffer")
           .put(QueryDateTimeInfo.class, "getQueryDateTimeInfo")
+          .put(PartitionExplorer.class, "getPartitionExplorer")
           .build();
 
   /**
@@ -54,4 +56,26 @@ public interface UdfUtilities {
    *           for memory management
    */
   DrillBuf getManagedBuffer();
+
+  /**
+   * A partition explorer allows UDFs to view the sub-partitions below a
+   * particular partition. This allows for the implementation of UDFs to
+   * query against the partition information, without having to read
+   * the actual data contained in the partition. This interface is designed
+   * for UDFs that take only constant inputs, as this interface will only
+   * be useful if we can evaluate the constant UDF at planning time.
+   *
+   * Any function defined to use this interface that is not evaluated
+   * at planning time by the constant folding rule will be querying
+   * the storage plugin for meta-data for each record processed.
+   *
+   * Be sure to check the query plans to see that this expression has already
+   * been evaluated during planning if you write UDFs against this interface.
+   *
+   * See {@link org.apache.drill.exec.expr.fn.impl.DirectoryExplorers} for
+   * example usages of this interface.
+   *
+   * @return - an object for exploring partitions of all available schemas
+   */
+  PartitionExplorer getPartitionExplorer();
 }

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/AbstractSchema.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/AbstractSchema.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/AbstractSchema.java
index 90e3ef4..9477a59 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/AbstractSchema.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/AbstractSchema.java
@@ -34,7 +34,7 @@ import org.apache.drill.exec.planner.logical.CreateTableEntry;
 import com.google.common.base.Joiner;
 import com.google.common.collect.Lists;
 
-public abstract class AbstractSchema implements Schema{
+public abstract class AbstractSchema implements Schema, SchemaPartitionExplorer {
   static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(AbstractSchema.class);
 
   protected final List<String> schemaPath;
@@ -48,6 +48,17 @@ public abstract class AbstractSchema implements Schema{
     this.name = name;
   }
 
+  @Override
+  public Iterable<String> getSubPartitions(String table,
+                                           List<String> partitionColumns,
+                                           List<String> partitionValues
+                                          ) throws PartitionNotFoundException {
+    throw new UnsupportedOperationException(
+        String.format("Schema of type: %s " +
+                      "does not support retrieving sub-partition information.",
+                      this.getClass().getSimpleName()));
+  }
+
   public String getName() {
     return name;
   }
@@ -96,7 +107,7 @@ public abstract class AbstractSchema implements Schema{
   }
 
   @Override
-  public Schema getSubSchema(String name) {
+  public AbstractSchema getSubSchema(String name) {
     return null;
   }
 

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionExplorer.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionExplorer.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionExplorer.java
new file mode 100644
index 0000000..fb0ba67
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionExplorer.java
@@ -0,0 +1,102 @@
+/*******************************************************************************
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ ******************************************************************************/
+package org.apache.drill.exec.store;
+
+import java.util.List;
+
+/**
+ * Exposes partition information to UDFs to allow queries to limit reading
+ * partitions dynamically.
+ *
+ * In a Drill query, a specific partition can be read by simply
+ * using a filter on a directory column. For example, if data is partitioned
+ * by year and month using directory names, a particular year/month can be
+ * read with the following query.
+ *
+ * <pre>
+ * select * from dfs.my_workspace.data_directory where dir0 = '2014_01';
+ * </pre>
+ *
+ * This assumes that below data_directory there are sub-directories with
+ * years and month numbers as folder names, and data stored below them.
+ *
+ * This works in cases where the partition column is known, but the current
+ * implementation does not allow the partition information itself to be queried.
+ * An example of such behavior would be a query that should always return the
+ * latest month of data, without having to be updated periodically.
+ * While it is possible to write a query like the one below, it will be very
+ * expensive, as this currently is materialized as a full table scan followed
+ * by an aggregation on the partition dir0 column and finally a filter.
+ *
+ * <pre>
+ * select * from dfs.my_workspace.data_directory where dir0 in
+ *    (select MAX(dir0) from dfs.my_workspace.data_directory);
+ * </pre>
+ *
+ * This interface allows the definition of a UDF to perform the sub-query
+ * on the list of partitions. This UDF can be used at planning time to
+ * prune out all of the unnecessary reads of the previous example.
+ *
+ * <pre>
+ * select * from dfs.my_workspace.data_directory
+ *    where dir0 = maxdir('dfs.my_workspace', 'data_directory');
+ * </pre>
+ *
+ * Look at {@link org.apache.drill.exec.expr.fn.impl.DirectoryExplorers}
+ * for examples of UDFs that use this interface to query against
+ * partition information.
+ */
+public interface PartitionExplorer {
+  /**
+   * For the schema provided,
+   * get a list of sub-partitions of a particular table and the partitions
+   * specified by partition columns and values. Individual storage
+   * plugins will assign specific meaning to the parameters and return
+   * values.
+   *
+   * A return value of an empty list should be given if the partition has
+   * no sub-partitions.
+   *
+   * Note this does cause a collision between empty partitions and leaf partitions;
+   * the interface should be modified if the distinction is meaningful.
+   *
+   * Example: for a filesystem plugin the partition information can simply
+   * be a path from the root of the given workspace to the desired directory. The
+   * return value should be defined as a list of full paths (again from the root
+   * of the workspace), which can be passed back into this interface to explore
+   * partitions further down. An empty list would be returned if the partition
+   * provided was a file, or an empty directory.
+   *
+   * Note to future devs, keep this doc in sync with
+   * {@link SchemaPartitionExplorer}.
+   *
+   * @param schema schema path, can be complete or relative to the default schema
+   * @param partitionColumns a list of partitions to match
+   * @param partitionValues list of values of each partition (corresponding
+   *                        to the partition column list)
+   * @return list of sub-partitions, will be empty if there is no further
+   *         level of sub-partitioning below, i.e. hit a leaf partition
+   * @throws PartitionNotFoundException when the partition does not exist in
+   *          the given workspace
+   */
+  Iterable<String> getSubPartitions(String schema,
+                                    String table,
+                                    List<String> partitionColumns,
+                                    List<String> partitionValues)
+      throws PartitionNotFoundException;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionExplorerImpl.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionExplorerImpl.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionExplorerImpl.java
new file mode 100644
index 0000000..024ca09
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionExplorerImpl.java
@@ -0,0 +1,42 @@
+/*******************************************************************************
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ ******************************************************************************/
+package org.apache.drill.exec.store;
+
+import net.hydromatic.optiq.SchemaPlus;
+
+import java.util.List;
+
+public class PartitionExplorerImpl implements PartitionExplorer {
+
+  private final SchemaPlus rootSchema;
+
+  public PartitionExplorerImpl(SchemaPlus rootSchema) {
+    this.rootSchema = rootSchema;
+  }
+
+  @Override
+  public Iterable<String> getSubPartitions(String schema,
+                                           String table,
+                                           List<String> partitionColumns,
+                                           List<String> partitionValues
+                                           ) throws PartitionNotFoundException {
+
+    AbstractSchema subSchema = rootSchema.getSubSchema(schema).unwrap(AbstractSchema.class);
+    return subSchema.getSubPartitions(table, partitionColumns, partitionValues);
+  }
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionNotFoundException.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionNotFoundException.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionNotFoundException.java
new file mode 100644
index 0000000..0792c8f
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/PartitionNotFoundException.java
@@ -0,0 +1,35 @@
+/*******************************************************************************
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ ******************************************************************************/
+package org.apache.drill.exec.store;
+
+public class PartitionNotFoundException extends Exception {
+
+  public PartitionNotFoundException() { }
+
+  public PartitionNotFoundException(String s) {
+    super(s);
+  }
+
+  public PartitionNotFoundException(Exception ex) {
+    super(ex);
+  }
+
+  public PartitionNotFoundException(String s, Exception ex) {
+    super(s, ex);
+  }
+}

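The checked exception above exists so that filesystem failures surface with table-level context while keeping the original cause. A self-contained sketch of that wrapping pattern (the exception class here is an analogue of the one above, and `listPartitions` is a made-up method that simulates a failure):

```java
import java.io.IOException;

class WrapDemo {
  // Self-contained analogue of PartitionNotFoundException above.
  static class PartitionNotFound extends Exception {
    PartitionNotFound(String msg, Exception cause) { super(msg, cause); }
  }

  static Iterable<String> listPartitions(String table) throws PartitionNotFound {
    try {
      // A real implementation would list the filesystem here; we simulate a failure.
      throw new IOException("no such directory: " + table);
    } catch (IOException e) {
      throw new PartitionNotFound("Error finding partitions for table " + table, e);
    }
  }

  public static void main(String[] args) {
    try {
      listPartitions("missing_table");
    } catch (PartitionNotFound e) {
      System.out.println(e.getMessage());            // context added by the wrapper
      System.out.println(e.getCause().getMessage()); // original IO failure preserved
    }
  }
}
```

This is the same shape the commit uses in `FileSystemSchemaFactory` and `WorkspaceSchemaFactory`, where an `IOException` from the directory listing is rethrown as `PartitionNotFoundException`.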
http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/SchemaPartitionExplorer.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/SchemaPartitionExplorer.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/SchemaPartitionExplorer.java
new file mode 100644
index 0000000..5281adb
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/SchemaPartitionExplorer.java
@@ -0,0 +1,49 @@
+/*******************************************************************************
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ ******************************************************************************/
+package org.apache.drill.exec.store;
+
+import java.util.List;
+
+/**
+ * Exposes partition information for a particular schema.
+ *
+ * For a fuller explanation of the current use of this interface, see
+ * the documentation in {@link PartitionExplorer}.
+ */
+public interface SchemaPartitionExplorer {
+
+  /**
+   * Get a list of sub-partitions of a particular table and the partitions
+   * specified by partition columns and values. Individual storage
+   * plugins will assign specific meaning to the parameters and return
+   * values.
+   *
+   * For more info see docs in {@link PartitionExplorer}.
+   *
+   * @param partitionColumns a list of partitions to match
+   * @param partitionValues list of values of each partition (corresponding
+   *                        to the partition column list)
+   * @return list of sub-partitions; will be empty if there is no further
+   *         level of sub-partitioning below, i.e. a leaf partition was hit
+   * @throws PartitionNotFoundException when the partition does not exist in
+   *          the given workspace
+   */
+  Iterable<String> getSubPartitions(String table,
+                                    List<String> partitionColumns,
+                                    List<String> partitionValues) throws PartitionNotFoundException;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/SubSchemaWrapper.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/SubSchemaWrapper.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/SubSchemaWrapper.java
index 2c0d8b8..c792550 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/SubSchemaWrapper.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/SubSchemaWrapper.java
@@ -18,10 +18,10 @@
 package org.apache.drill.exec.store;
 
 import java.util.Collection;
+import java.util.List;
 import java.util.Set;
 
 import net.hydromatic.optiq.Function;
-import net.hydromatic.optiq.Schema;
 import net.hydromatic.optiq.Table;
 
 import org.apache.drill.exec.planner.logical.CreateTableEntry;
@@ -43,6 +43,14 @@ public class SubSchemaWrapper extends AbstractSchema {
   }
 
   @Override
+  public Iterable<String> getSubPartitions(String table,
+                                           List<String> partitionColumns,
+                                           List<String> partitionValues
+  ) throws PartitionNotFoundException {
+    return getDefaultSchema().getSubPartitions(table, partitionColumns, partitionValues);
+  }
+
+  @Override
   public AbstractSchema getDefaultSchema() {
     return innerSchema.getDefaultSchema();
   }
@@ -63,7 +71,7 @@ public class SubSchemaWrapper extends AbstractSchema {
   }
 
   @Override
-  public Schema getSubSchema(String name) {
+  public AbstractSchema getSubSchema(String name) {
     return innerSchema.getSubSchema(name);
   }
 

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java
index 4a3eba9..44132d0 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java
@@ -17,24 +17,27 @@
  */
 package org.apache.drill.exec.store.dfs;
 
+import java.io.IOException;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
 import net.hydromatic.optiq.Function;
-import net.hydromatic.optiq.Schema;
 import net.hydromatic.optiq.SchemaPlus;
 import net.hydromatic.optiq.Table;
 
 import org.apache.drill.exec.planner.logical.CreateTableEntry;
 import org.apache.drill.exec.rpc.user.UserSession;
 import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.PartitionNotFoundException;
 import org.apache.drill.exec.store.SchemaFactory;
 import org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.WorkspaceSchema;
 
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.Maps;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
 
 
 /**
@@ -83,6 +86,20 @@ public class FileSystemSchemaFactory implements SchemaFactory{
     }
 
     @Override
+    public Iterable<String> getSubPartitions(String table,
+                                             List<String> partitionColumns,
+                                             List<String> partitionValues
+                                            ) throws PartitionNotFoundException {
+      List<FileStatus> fileStatuses;
+      try {
+        fileStatuses = defaultSchema.getFS().list(false, new Path(defaultSchema.getDefaultLocation(), table));
+      } catch (IOException e) {
+        throw new PartitionNotFoundException("Error finding partitions for table " + table, e);
+      }
+      return new SubDirectoryList(fileStatuses);
+    }
+
+    @Override
     public boolean showInInformationSchema() {
       return false;
     }
@@ -108,7 +125,7 @@ public class FileSystemSchemaFactory implements SchemaFactory{
     }
 
     @Override
-    public Schema getSubSchema(String name) {
+    public AbstractSchema getSubSchema(String name) {
       return schemaMap.get(name);
     }
 

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/SubDirectoryList.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/SubDirectoryList.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/SubDirectoryList.java
new file mode 100644
index 0000000..0300114
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/SubDirectoryList.java
@@ -0,0 +1,73 @@
+/*******************************************************************************
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ ******************************************************************************/
+package org.apache.drill.exec.store.dfs;
+
+import org.apache.drill.exec.store.PartitionExplorer;
+import org.apache.hadoop.fs.FileStatus;
+
+import java.util.Iterator;
+import java.util.List;
+
+public class SubDirectoryList implements Iterable<String> {
+  final List<FileStatus> fileStatuses;
+
+  SubDirectoryList(List<FileStatus> fileStatuses) {
+    this.fileStatuses = fileStatuses;
+  }
+
+  @Override
+  public Iterator<String> iterator() {
+    return new SubDirectoryIterator(fileStatuses.iterator());
+  }
+
+  private class SubDirectoryIterator implements Iterator<String> {
+
+    final Iterator<FileStatus> fileStatusIterator;
+
+    SubDirectoryIterator(Iterator<FileStatus> fileStatusIterator) {
+      this.fileStatusIterator = fileStatusIterator;
+    }
+
+    @Override
+    public boolean hasNext() {
+      return fileStatusIterator.hasNext();
+    }
+
+    @Override
+    public String next() {
+      return fileStatusIterator.next().getPath().toUri().toString();
+    }
+
+    /**
+     * This class is designed specifically for use in conjunction with the
+     * {@link org.apache.drill.exec.store.PartitionExplorer} interface.
+     * This is only designed for accessing partition information, not
+     * modifying it. To avoid confusing users of the interface this
+     * method throws UnsupportedOperationException.
+     *
+     * @throws UnsupportedOperationException always; the list being iterated
+     *           over is read-only, so removing an element from it would
+     *           never be meaningful.
+     */
+    @Override
+    public void remove() {
+      throw new UnsupportedOperationException(String.format("Cannot modify partition information through the " +
+          "%s interface.", PartitionExplorer.class.getSimpleName()));
+    }
+  }
+}

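The read-only wrapper pattern used by `SubDirectoryList` can be sketched stand-alone: delegate `hasNext`/`next` to an inner iterator and make `remove` throw. The class and names below are illustrative, not the Drill ones:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Stand-alone sketch of the read-only Iterable wrapper pattern used above.
class ReadOnlyNames implements Iterable<String> {
  private final List<String> names;

  ReadOnlyNames(List<String> names) {
    this.names = names;
  }

  @Override
  public Iterator<String> iterator() {
    final Iterator<String> inner = names.iterator();
    return new Iterator<String>() {
      @Override public boolean hasNext() { return inner.hasNext(); }
      @Override public String next() { return inner.next(); }
      // Mirrors SubDirectoryIterator: the backing data must not be modified.
      @Override public void remove() {
        throw new UnsupportedOperationException("read-only iterator");
      }
    };
  }

  public static void main(String[] args) {
    for (String n : new ReadOnlyNames(Arrays.asList("bigfile", "smallfile"))) {
      System.out.println(n);
    }
  }
}
```

The real `SubDirectoryIterator` additionally maps each `FileStatus` to its path string in `next()`; otherwise the structure is the same.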
http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java
index aeff09b..45e9129 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java
@@ -19,6 +19,7 @@ package org.apache.drill.exec.store.dfs;
 
 import java.io.IOException;
 import java.io.OutputStream;
+import java.util.Collection;
 import java.util.List;
 import java.util.Set;
 import java.util.regex.Pattern;
@@ -42,6 +43,8 @@ import org.apache.drill.exec.planner.logical.FileSystemCreateTableEntry;
 import org.apache.drill.exec.planner.sql.ExpandingConcurrentMap;
 import org.apache.drill.exec.rpc.user.UserSession;
 import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.PartitionNotFoundException;
+import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
@@ -159,6 +162,21 @@ public class WorkspaceSchemaFactory implements ExpandingConcurrentMap.MapValueFa
       return replaced;
     }
 
+    @Override
+    public Iterable<String> getSubPartitions(String table,
+                                             List<String> partitionColumns,
+                                             List<String> partitionValues
+    ) throws PartitionNotFoundException {
+
+      List<FileStatus> fileStatuses;
+      try {
+        fileStatuses = getFS().list(false, new Path(getDefaultLocation(), table));
+      } catch (IOException e) {
+        throw new PartitionNotFoundException("Error finding partitions for table " + table, e);
+      }
+      return new SubDirectoryList(fileStatuses);
+    }
+
     public boolean viewExists(String viewName) throws Exception {
       Path viewPath = getViewPath(viewName);
       return fs.exists(viewPath);

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/test/java/org/apache/drill/exec/fn/interp/TestConstantFolding.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/fn/interp/TestConstantFolding.java b/exec/java-exec/src/test/java/org/apache/drill/exec/fn/interp/TestConstantFolding.java
index b59be78..b17935a 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/fn/interp/TestConstantFolding.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/fn/interp/TestConstantFolding.java
@@ -18,6 +18,8 @@
 package org.apache.drill.exec.fn.interp;
 
 import org.apache.drill.PlanTestBase;
+import org.apache.drill.exec.util.JsonStringArrayList;
+import org.apache.hadoop.io.Text;
 import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
@@ -35,22 +37,38 @@ public class TestConstantFolding extends PlanTestBase {
   // Unfortunately, the temporary folder with an @Rule annotation cannot be static, this issue
   // has been fixed in a newer version of JUnit
   // http://stackoverflow.com/questions/2722358/junit-rule-temporaryfolder
-  public void createFiles(int smallFileLines, int bigFileLines) throws Exception{
-    File bigFolder = folder.newFolder("bigfile");
-    File bigFile = new File (bigFolder, "bigfile.csv");
-    PrintWriter out = new PrintWriter(bigFile);
-    for (int i = 0; i < bigFileLines; i++ ) {
-      out.println("1,2,3");
+
+  public static class SmallFileCreator {
+
+    private final TemporaryFolder folder;
+
+    public SmallFileCreator(TemporaryFolder folder) {
+      this.folder = folder;
     }
-    out.close();
 
-    File smallFolder = folder.newFolder("smallfile");
-    File smallFile = new File (smallFolder, "smallfile.csv");
-    out = new PrintWriter(smallFile);
-    for (int i = 0; i < smallFileLines; i++ ) {
-      out.println("1,2,3");
+    public void createFiles(int smallFileLines, int bigFileLines) throws Exception{
+      PrintWriter out;
+      for (String fileAndFolderName : new String[]{"bigfile", "BIGFILE_2"}) {
+        File bigFolder = folder.newFolder(fileAndFolderName);
+        File bigFile = new File (bigFolder, fileAndFolderName + ".csv");
+        out = new PrintWriter(bigFile);
+        for (int i = 0; i < bigFileLines; i++ ) {
+          out.println("1,2,3");
+        }
+        out.close();
+      }
+
+      for (String fileAndFolderName : new String[]{"smallfile", "SMALLFILE_2"}) {
+        File smallFolder = folder.newFolder(fileAndFolderName);
+        File smallFile = new File (smallFolder, fileAndFolderName + ".csv");
+        out = new PrintWriter(smallFile);
+        for (int i = 0; i < smallFileLines; i++ ) {
+          out.println("1,2,3");
+        }
+        out.close();
+      }
     }
-    out.close();
+
   }
 
   @Test
@@ -108,7 +126,7 @@ public class TestConstantFolding extends PlanTestBase {
   @Ignore("DRILL-2553")
   @Test
   public void testConstExprFolding_withPartitionPrune_verySmallFiles() throws Exception {
-    createFiles(1, 8);
+    new SmallFileCreator(folder).createFiles(1, 8);
     String path = folder.getRoot().toPath().toString();
     testPlanOneExpectedPatternOneExcluded(
         "select * from dfs.`" + path + "/*/*.csv` where dir0 = concat('small','file')",
@@ -118,7 +136,7 @@ public class TestConstantFolding extends PlanTestBase {
 
   @Test
   public void testConstExprFolding_withPartitionPrune() throws Exception {
-    createFiles(1, 1000);
+    new SmallFileCreator(folder).createFiles(1, 1000);
     String path = folder.getRoot().toPath().toString();
     testPlanOneExpectedPatternOneExcluded(
         "select * from dfs.`" + path + "/*/*.csv` where dir0 = concat('small','file')",

http://git-wip-us.apache.org/repos/asf/drill/blob/af7a52be/exec/java-exec/src/test/java/org/apache/drill/exec/planner/TestDirectoryExplorerUDFs.java
----------------------------------------------------------------------
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/TestDirectoryExplorerUDFs.java b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/TestDirectoryExplorerUDFs.java
new file mode 100644
index 0000000..c2d4136
--- /dev/null
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/TestDirectoryExplorerUDFs.java
@@ -0,0 +1,106 @@
+/*******************************************************************************
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ ******************************************************************************/
+package org.apache.drill.exec.planner;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Lists;
+import org.apache.drill.PlanTestBase;
+import org.apache.drill.exec.fn.interp.TestConstantFolding;
+import org.apache.drill.exec.util.JsonStringArrayList;
+import org.apache.hadoop.io.Text;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.List;
+
+public class TestDirectoryExplorerUDFs extends PlanTestBase {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private class ConstantFoldingTestConfig {
+    String funcName;
+    String expectedFolderName;
+    public ConstantFoldingTestConfig(String funcName, String expectedFolderName) {
+      this.funcName = funcName;
+      this.expectedFolderName = expectedFolderName;
+    }
+  }
+
+  @Test
+  public void testConstExprFolding_maxDir0() throws Exception {
+
+    new TestConstantFolding.SmallFileCreator(folder).createFiles(1, 1000);
+    String path = folder.getRoot().toPath().toString();
+
+    test("use dfs.root");
+
+    // Need the suffixes to make the names unique in the directory.
+    // The capitalized name is on the opposite function (imaxdir and mindir)
+    // because they are looking on opposite ends of the list.
+    //
+    // BIGFILE_2 with the capital letter at the start of the name comes
+    // first in the case-sensitive ordering.
+    // SMALLFILE_2 comes last in a case-insensitive ordering because it has
+    // a suffix not found on smallfile.
+    List<ConstantFoldingTestConfig> tests = ImmutableList.<ConstantFoldingTestConfig>builder()
+        .add(new ConstantFoldingTestConfig("maxdir", "smallfile"))
+        .add(new ConstantFoldingTestConfig("imaxdir", "SMALLFILE_2"))
+        .add(new ConstantFoldingTestConfig("mindir", "BIGFILE_2"))
+        .add(new ConstantFoldingTestConfig("imindir", "bigfile"))
+        .build();
+
+    List<String> allFiles = ImmutableList.<String>builder()
+        .add("smallfile")
+        .add("SMALLFILE_2")
+        .add("bigfile")
+        .add("BIGFILE_2")
+        .build();
+
+    String query = "select * from dfs.`" + path + "/*/*.csv` where dir0 = %s('dfs.root','" + path + "')";
+    for (ConstantFoldingTestConfig config : tests) {
+      // make all of the other folders unexpected patterns, except for the one expected in this case
+      List<String> excludedPatterns = Lists.newArrayList();
+      excludedPatterns.addAll(allFiles);
+      excludedPatterns.remove(config.expectedFolderName);
+      // The list is easier to construct programmatically, but the API below takes an array to make it easier
+      // to write a list as a literal array in a typical test definition
+      String[] excludedArray = new String[excludedPatterns.size()];
+
+      testPlanMatchingPatterns(
+          String.format(query, config.funcName),
+          new String[] {config.expectedFolderName},
+          excludedPatterns.toArray(excludedArray));
+    }
+
+    JsonStringArrayList list = new JsonStringArrayList();
+
+    list.add(new Text("1"));
+    list.add(new Text("2"));
+    list.add(new Text("3"));
+
+    testBuilder()
+        .sqlQuery(String.format(query, tests.get(0).funcName))
+        .unOrdered()
+        .baselineColumns("columns", "dir0")
+        .baselineValues(list, tests.get(0).expectedFolderName)
+        .go();
+  }
+
+}

