The ability to read data from the pipeline into the client and make decisions based on that data allows us to create sophisticated analytical
applications that can modify their downstream processing based on the results of upstream computations.
The Java API is centered around three interfaces that represent distributed datasets: PCollection<T>, PTable<K, V>, and PGroupedTable<K, V>.
A PCollection<T> represents a distributed, unordered collection of elements of type T. For example, we represent a text file as a
PCollection<String> object. PCollection<T> provides a method, parallelDo, that applies a DoFn to each element in the PCollection<T>
in parallel, and returns a new PCollection<U> as its result.
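For illustration, here is a sketch of a DoFn applied via parallelDo (the class name and sample data are ours, and the in-memory MemPipeline stands in for a cluster-backed pipeline):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.crunch.DoFn;
import org.apache.crunch.Emitter;
import org.apache.crunch.PCollection;
import org.apache.crunch.impl.mem.MemPipeline;
import org.apache.crunch.types.writable.Writables;

public class ParallelDoExample {

  // A DoFn<S, T> processes one input element at a time and may emit
  // zero or more output elements via the Emitter.
  static class SplitWords extends DoFn<String, String> {
    @Override
    public void process(String line, Emitter<String> emitter) {
      for (String word : line.split("\\s+")) {
        emitter.emit(word);
      }
    }
  }

  static List<String> run() {
    // An in-memory PCollection<String>, standing in for a text file.
    PCollection<String> lines = MemPipeline.typedCollectionOf(
        Writables.strings(), "hello crunch", "hello world");
    // parallelDo applies the DoFn to every element and returns a new
    // PCollection; the second argument is the PType of the result.
    PCollection<String> words = lines.parallelDo(new SplitWords(), Writables.strings());
    List<String> out = new ArrayList<>();
    for (String w : words.materialize()) {
      out.add(w);
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```

Note that because SplitWords may emit any number of words per line, the output PCollection can be larger or smaller than the input.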
A PTable<K, V> is a sub-interface of PCollection<Pair<K, V>> that represents a distributed, unordered multimap of its key type K to its value type V.
In addition to the parallelDo operation, PTable provides a groupByKey operation that aggregates all of the values in the PTable that
have the same key into a single record. It is the groupByKey operation that triggers the sort phase of a MapReduce job. Developers can exercise
fine-grained control over the number of reducers and the partitioning, grouping, and sorting strategies used during the shuffle by providing an instance
of the GroupingOptions class to the groupByKey function.
The result of a groupByKey operation is a PGroupedTable<K, V> object, which is a distributed, sorted map of keys of type K to an Iterable<V>
that may be iterated over by its users. In addition to parallelDo processing via DoFns, PGroupedTable provides a combineValues operation that allows a
commutative and associative Aggregator to be applied to the values of the PGroupedTable instance. A number of common Aggregator<V>
implementations are provided in the Aggregators class.
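As a sketch of how groupByKey and combineValues fit together (the data is illustrative, and we assume the SUM_LONGS factory from the Aggregators class):

```java
import java.util.Map;

import org.apache.crunch.PTable;
import org.apache.crunch.fn.Aggregators;
import org.apache.crunch.impl.mem.MemPipeline;
import org.apache.crunch.types.writable.Writables;

public class CombineValuesExample {

  static Map<String, Long> run() {
    // A PTable mapping each word to the count 1, as a word count would.
    PTable<String, Long> ones = MemPipeline.typedTableOf(
        Writables.tableOf(Writables.strings(), Writables.longs()),
        "apple", 1L, "banana", 1L, "apple", 1L);
    // groupByKey gathers the values for each key; combineValues applies a
    // commutative and associative Aggregator to each group of values.
    PTable<String, Long> counts = ones.groupByKey()
        .combineValues(Aggregators.SUM_LONGS());
    return counts.materializeToMap();
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```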
Finally, PCollection, PTable, and PGroupedTable all support a union
operation, which takes a series of distinct PCollections that all have
the same data type and treats them as a single virtual PCollection.
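A minimal sketch of union over two in-memory collections (the data is illustrative):

```java
import org.apache.crunch.PCollection;
import org.apache.crunch.impl.mem.MemPipeline;
import org.apache.crunch.types.writable.Writables;

public class UnionExample {

  static int run() {
    PCollection<String> first = MemPipeline.typedCollectionOf(
        Writables.strings(), "x");
    PCollection<String> second = MemPipeline.typedCollectionOf(
        Writables.strings(), "y", "z");
    // union treats both inputs as a single virtual PCollection; no data
    // is copied or re-shuffled to construct it.
    PCollection<String> all = first.union(second);
    int count = 0;
    for (String ignored : all.materialize()) {
      count++;
    }
    return count;
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```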
All of the other data transformation operations supported by the Crunch APIs (aggregations, joins, sorts, secondary sorts, and cogrouping) are implemented in terms of these four primitives. The patterns themselves are defined in the org.apache.crunch.lib package and its children, and a few of the most common patterns have convenience functions defined on the PCollection and PTable interfaces.
DoFns represent the logical computations of your Crunch pipelines. They are designed to be easy to write, easy to test, and easy to deploy within the context of a MapReduce job.
Crunch provides a number of helper methods for working with Hadoop Counters, all named increment.
Counters are an incredibly useful way of keeping track of the state of long-running data pipelines and detecting any exceptional conditions that
occur during processing, and they are supported in both the MapReduce-based and in-memory Crunch pipeline contexts. You can retrieve the value of the Counters
in your client code at the end of a MapReduce pipeline by getting them from the StageResult
objects returned by Crunch at the end of a run.
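A sketch of the increment helpers inside a DoFn, exercised through the in-memory pipeline (the group and counter names are made up for illustration):

```java
import org.apache.crunch.DoFn;
import org.apache.crunch.Emitter;
import org.apache.crunch.PCollection;
import org.apache.crunch.impl.mem.MemPipeline;
import org.apache.crunch.types.writable.Writables;

public class CounterExample {

  // A DoFn that drops blank lines and tracks how many it saw via a Counter.
  static class ParseLine extends DoFn<String, String> {
    @Override
    public void process(String line, Emitter<String> emitter) {
      if (line.trim().isEmpty()) {
        // One of DoFn's increment helpers: group name, then counter name.
        increment("ParseErrors", "BlankLine");
      } else {
        emitter.emit(line.trim());
      }
    }
  }

  static long run() {
    PCollection<String> lines = MemPipeline.typedCollectionOf(
        Writables.strings(), "a", "  ", "b");
    // Force evaluation of the pipeline stage.
    for (String ignored : lines.parallelDo(new ParseLine(), Writables.strings()).materialize()) {
      // no-op; iterating triggers processing
    }
    // In the in-memory context, counter values are exposed via MemPipeline.
    return MemPipeline.getCounters().findCounter("ParseErrors", "BlankLine").getValue();
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```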
(Note that there was a change in the Counters API from Hadoop 1.0 to Hadoop 2.0, and thus we do not recommend that you work with the
Counter classes directly in your Crunch pipelines; the two getCounter methods that were defined in DoFn are both deprecated, so that you will not be
required to recompile your job jars when you move between Hadoop 1.0 and Hadoop 2.0 clusters.)
Some DoFns will require extra memory settings to run, so Crunch lets the planner adjust the
memory setting for the DoFn's needs before the job is launched on the cluster.
The Crunch APIs contain a number of useful subclasses of DoFn that handle common data processing scenarios and are easier to write and test. The top-level org.apache.crunch package contains three of the most important specializations, which we will discuss now. Each of these specialized DoFn implementations has associated methods on the PCollection, PTable, and PGroupedTable interfaces to support common data processing steps.
The simplest extension is the FilterFn class, which defines a single abstract method, boolean accept(T input).
The FilterFn can be applied to a PCollection<T>
by calling the filter(FilterFn<T> fn)
method, and will return a new PCollection<T>
that only contains
the elements of the input PCollection for which the accept method returned true. Note that the filter function does not include a PType argument in its
signature, because there is no change in the data type of the PCollection when the FilterFn is applied. It is possible to compose new FilterFn
instances by combining multiple FilterFns together using the and, or, and not factory methods defined in the
FilterFns helper class.
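A sketch of a FilterFn and of composition via FilterFns.and (the class names and data are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.crunch.FilterFn;
import org.apache.crunch.PCollection;
import org.apache.crunch.fn.FilterFns;
import org.apache.crunch.impl.mem.MemPipeline;
import org.apache.crunch.types.writable.Writables;

public class FilterExample {

  static class NonEmpty extends FilterFn<String> {
    @Override
    public boolean accept(String input) {
      return !input.isEmpty();
    }
  }

  static class ShortString extends FilterFn<String> {
    @Override
    public boolean accept(String input) {
      return input.length() <= 10;
    }
  }

  static List<String> run() {
    PCollection<String> lines = MemPipeline.typedCollectionOf(
        Writables.strings(), "crunch", "", "a much longer line of text");
    // Compose FilterFns with the and/or/not helpers in FilterFns; note that
    // filter takes no PType argument because the element type is unchanged.
    PCollection<String> kept = lines.filter(FilterFns.and(new NonEmpty(), new ShortString()));
    List<String> out = new ArrayList<>();
    for (String s : kept.materialize()) {
      out.add(s);
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```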
The second extension is the MapFn class, which defines a single abstract method, T map(S input).
For simple transform tasks in which every input record will have exactly one output, it's easy to test a MapFn by verifying that a given input returns a given output.
MapFns are also used in specialized methods on the PCollection and PTable interfaces. PCollection<V> defines the method
PTable<K, V> by(MapFn<V, K> mapFn, PType<K> keyType), which applies a map function that extracts the key (of type K) from each element;
it only requires that the PType of the key be given and constructs a PTableType<K, V> from the given key type and the PCollection's existing value type. PTable<K, V>, in turn,
has methods PTable<K1, V> mapKeys(MapFn<K, K1> mapFn)
and PTable<K, V2> mapValues(MapFn<V, V2>)
that handle the common case of converting
just one of the paired values in a PTable instance from one type to another while leaving the other type the same.
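A sketch of mapValues (the data is illustrative; we use the mapValues overload that also takes the result PType):

```java
import java.util.Map;

import org.apache.crunch.MapFn;
import org.apache.crunch.PTable;
import org.apache.crunch.impl.mem.MemPipeline;
import org.apache.crunch.types.writable.Writables;

public class MapValuesExample {

  static Map<String, Integer> run() {
    PTable<String, String> raw = MemPipeline.typedTableOf(
        Writables.tableOf(Writables.strings(), Writables.strings()),
        "a", "1", "b", "22");
    // mapValues converts only the value side of each pair; the keys and
    // the key type are left untouched.
    PTable<String, Integer> lengths = raw.mapValues(
        new MapFn<String, Integer>() {
          @Override
          public Integer map(String value) {
            return value.length();
          }
        },
        Writables.ints());
    return lengths.materializeToMap();
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```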
The final top-level extension to DoFn is the CombineFn class, which is used in conjunction with the combineValues
method defined on the PGroupedTable interface. CombineFns are used to represent the associative operations that can be applied using
the MapReduce Combiner concept in order to reduce the amount of data that is shipped over the network during a shuffle.
The CombineFn extension is different from the FilterFn and MapFn classes in that it does not define an abstract method for handling data
beyond the default process
method that any other DoFn would use; rather, extending the CombineFn class signals to the Crunch planner that the logic
contained in this class satisfies the conditions required for use with the MapReduce combiner.
Crunch supports many types of these associative patterns, such as sums, counts, and set unions, via the Aggregator
interface, which is defined in the top-level org.apache.crunch package. There are a number of implementations of the Aggregator
interface defined via static factory methods in the Aggregators class.
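To illustrate CombineFn's contract (a sketch; the class name and data are ours), a summing CombineFn overrides only the usual process method, and extending CombineFn is what tells the planner the logic is combiner-safe:

```java
import java.util.Map;

import org.apache.crunch.CombineFn;
import org.apache.crunch.Emitter;
import org.apache.crunch.PTable;
import org.apache.crunch.Pair;
import org.apache.crunch.impl.mem.MemPipeline;
import org.apache.crunch.types.writable.Writables;

public class CustomCombineFnExample {

  // Extending CombineFn adds no new abstract method; it signals to the
  // planner that this process logic is safe to run as a MapReduce combiner,
  // so it must be commutative and associative (as summation is).
  static class SumLongs extends CombineFn<String, Long> {
    @Override
    public void process(Pair<String, Iterable<Long>> input,
                        Emitter<Pair<String, Long>> emitter) {
      long sum = 0L;
      for (Long value : input.second()) {
        sum += value;
      }
      emitter.emit(Pair.of(input.first(), sum));
    }
  }

  static Map<String, Long> run() {
    PTable<String, Long> ones = MemPipeline.typedTableOf(
        Writables.tableOf(Writables.strings(), Writables.longs()),
        "apple", 1L, "banana", 1L, "apple", 1L);
    return ones.groupByKey().combineValues(new SumLongs()).materializeToMap();
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```

In practice you would usually reach for a prebuilt Aggregator rather than hand-rolling a CombineFn like this.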
Why PTypes are necessary, the two type families, the core methods, and tuples.
The simplest way to create a new PType<T>
for a data object is to create a derived PType from one of the built-in PTypes for the Avro
and Writable type families. If we have a base PType<S>
, we can create a derived PType<T>
by implementing an input MapFn<S, T>
and an
output MapFn<T, S>
and then calling PTypeFamily.derived(Class<T>, MapFn<S, T> in, MapFn<T, S> out, PType<S> base)
, which will return
a new PType<T>. There are examples of derived PTypes in the PTypes class, including
serialization support for protocol buffers, Thrift records, Java Enums, BigInteger, and UUIDs.
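For instance, a derived PType<URI> built on top of the Writable type family's string PType might look like this sketch (the choice of URI is ours):

```java
import java.net.URI;

import org.apache.crunch.MapFn;
import org.apache.crunch.types.PType;
import org.apache.crunch.types.PTypeFamily;
import org.apache.crunch.types.writable.WritableTypeFamily;

public class DerivedPTypeExample {

  static PType<URI> uriType() {
    PTypeFamily tf = WritableTypeFamily.getInstance();
    // The input MapFn converts the base type (String) to the derived type
    // (URI); the output MapFn converts back for serialization.
    return tf.derived(
        URI.class,
        new MapFn<String, URI>() {
          @Override
          public URI map(String input) {
            return URI.create(input);
          }
        },
        new MapFn<URI, String>() {
          @Override
          public String map(URI input) {
            return input.toString();
          }
        },
        tf.strings());
  }

  public static void main(String[] args) {
    System.out.println(uriType().getTypeClass());
  }
}
```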
MapReduce developers are familiar with the InputFormat<K, V>
and OutputFormat<K, V>
classes for reading and writing data during a job. Crunch's analogue for reading data is the Source<T> interface, and its ReadableSource<T> sub-interface
declares an Iterable<T> read(Configuration conf) method that allows the data from the source to be read either into the client
or into a DoFn implementation that can use the data read from the source to perform additional transforms on the main input data that is
processed using the DoFn's process method (this is how Crunch supports mapside-join operations).
Support for the most common Source, Target, and SourceTarget implementations is provided by the factory functions declared in the From (Sources), To (Targets), and At (SourceTargets) classes in the org.apache.crunch.io package.
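A sketch of reading a local text file via From.textFile, using the in-memory pipeline (the temp-file setup and data are ours, added so the example is self-contained):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.crunch.PCollection;
import org.apache.crunch.Pipeline;
import org.apache.crunch.impl.mem.MemPipeline;
import org.apache.crunch.io.From;

public class SourceExample {

  static List<String> run() throws Exception {
    // Write a small local file to act as the pipeline's input.
    Path input = Files.createTempFile("crunch-input", ".txt");
    Files.write(input, Arrays.asList("one", "two"));

    Pipeline pipeline = MemPipeline.getInstance();
    // From.textFile builds a Source<String> for a text file; the analogous
    // To.textFile builds a Target, and At.textFile a SourceTarget.
    PCollection<String> lines = pipeline.read(From.textFile(input.toString()));
    List<String> out = new ArrayList<>();
    for (String line : lines.materialize()) {
      out.add(line);
    }
    pipeline.done();
    return out;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(run());
  }
}
```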