orc-dev mailing list archives

From omalley <...@git.apache.org>
Subject [GitHub] orc pull request: ORC-52. Create ORC InputFormat and OutputFormat implementa...
Date Tue, 31 May 2016 23:45:27 GMT
Github user omalley commented on a diff in the pull request:

    https://github.com/apache/orc/pull/27#discussion_r65282144
  
    --- Diff: java/mapreduce/src/test/org/apache/orc/mapreduce/TestMrUnit.java ---
    @@ -0,0 +1,202 @@
    +/**
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.orc.mapreduce;
    +
    +import org.apache.hadoop.io.IntWritable;
    +import org.apache.hadoop.io.NullWritable;
    +import org.apache.hadoop.io.Text;
    +import org.apache.hadoop.io.serializer.Deserializer;
    +import org.apache.hadoop.io.serializer.Serialization;
    +import org.apache.hadoop.io.serializer.Serializer;
    +import org.apache.hadoop.io.serializer.WritableSerialization;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapreduce.Mapper;
    +import org.apache.hadoop.mapreduce.Reducer;
    +import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver;
    +import org.apache.orc.TypeDescription;
    +import org.apache.orc.mapred.OrcKey;
    +import org.apache.orc.mapred.OrcStruct;
    +import org.apache.orc.mapred.OrcValue;
    +import org.junit.Test;
    +
    +import java.io.DataInputStream;
    +import java.io.DataOutputStream;
    +import java.io.IOException;
    +import java.io.InputStream;
    +import java.io.OutputStream;
    +import java.util.Iterator;
    +
    +public class TestMrUnit {
    +  JobConf conf = new JobConf();
    +
    +  /**
    +   * Split the input struct into its two parts.
    +   */
    +  public static class MyMapper
    +      extends Mapper<NullWritable, OrcStruct, OrcKey, OrcValue> {
    +    private OrcKey keyWrapper = new OrcKey();
    +    private OrcValue valueWrapper = new OrcValue();
    +
    +    @Override
    +    protected void map(NullWritable key,
    +                       OrcStruct value,
    +                       Context context
    +                       ) throws IOException, InterruptedException {
    +      keyWrapper.key = value.getFieldValue(0);
    +      valueWrapper.value = value.getFieldValue(1);
    +      context.write(keyWrapper, valueWrapper);
    +    }
    +  }
    +
    +  /**
    +   * Glue the key and values back together.
    +   */
    +  public static class MyReducer
    +      extends Reducer<OrcKey, OrcValue, NullWritable, OrcStruct> {
    +    private OrcStruct output = new OrcStruct(TypeDescription.fromString
    +        ("struct<first:struct<x:int,y:int>,second:struct<z:string>>"));
    +    private final NullWritable nada = NullWritable.get();
    +
    +    @Override
    +    protected void reduce(OrcKey key,
    +                          Iterable<OrcValue> values,
    +                          Context context
    +                          ) throws IOException, InterruptedException {
    +      output.setFieldValue(0, key.key);
    +      for(OrcValue value: values) {
    +        output.setFieldValue(1, value.value);
    +        context.write(nada, output);
    +      }
    +    }
    +  }
    +
    +  /**
    +   * This class is intended to support MRUnit's object copying for input and
    +   * output objects.
    +   *
    +   * Real mapreduce contexts should NEVER use this class.
    --- End diff --
    
    My goal with OrcKey and OrcValue was to encode the respective types once in the JobConf
rather than encoding them per value. They let you control the key and value types of the shuffle
with knobs that don't conflict with either the input or the output.
    
    So if you are shuffling with two OrcStructs, you would define your JobConf like (filling
in the "..." with the appropriate fields):
    
    orc.mapred.key.type=struct<...>
    orc.mapred.value.type=struct<vals:array<..>>
    mapreduce.map.output.key.class=OrcKey
    mapreduce.map.output.value.class=OrcValue
    
    I guess that I should actually improve that to:
    
    orc.mapred.input.type=struct<...>
    orc.mapred.output.type=struct<...>
    orc.mapred.map.output.key.type=struct<...>
    orc.mapred.map.output.value.type=struct<...>
    mapreduce.map.output.key.class=OrcKey
    mapreduce.map.output.value.class=OrcValue
    
    The orc.mapred.input.type setting is only necessary if your application wants to use
schema evolution to convert the input to a specific type. The orc.mapred.output.type setting
would control the schema of the output format.
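    Wired up in code, the proposed knobs might look like the sketch below. (The
orc.mapred.*.type property names follow the proposal above, and the struct schemas are
placeholders taken from the test's shuffle types, not part of any real job.)

    ```java
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.orc.mapred.OrcKey;
    import org.apache.orc.mapred.OrcValue;

    public class OrcShuffleConfig {
      public static JobConf configure() {
        JobConf conf = new JobConf();
        // Declare the ORC schemas once in the configuration, rather than
        // serializing the type with every shuffled value.
        conf.set("orc.mapred.map.output.key.type", "struct<x:int,y:int>");
        conf.set("orc.mapred.map.output.value.type", "struct<z:string>");
        // Shuffle the wrapper classes; their serialization reads the
        // schemas back out of the configuration.
        conf.setMapOutputKeyClass(OrcKey.class);
        conf.setMapOutputValueClass(OrcValue.class);
        return conf;
      }
    }
    ```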
    
    Does that make sense?

