hadoop-mapreduce-user mailing list archives

From Dhruv <dhru...@gmail.com>
Subject OutputFormat and Reduce Task
Date Thu, 01 Nov 2012 22:45:21 GMT
I'm trying to optimize the performance of my OutputFormat implementation.
I'm doing something similar to HBase's TableOutputFormat--sending the
reducer's output to a distributed k-v store--so the context.write() call
basically winds up doing a Put() on the store.

Although I haven't profiled, a sequence of thread dumps on the reduce tasks
reveals that the threads are RUNNABLE and spending their time in put() and
its subsequent method calls. So, I proceeded to decouple the two by
implementing the producer-consumer pattern--context.write() as the producer,
RecordWriter.write() as the consumer--using an ExecutorService.
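For concreteness, here is a minimal sketch of the kind of decoupling I mean.
KvStore and AsyncRecordWriter are hypothetical stand-ins (not Hadoop or HBase
classes): write() just queues the work on a single-threaded ExecutorService
and returns, and close() drains the queue, the way RecordWriter.close() would
at the end of the task.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncWriteSketch {

    // Stand-in for the remote k-v store; put() is the slow, blocking call.
    static class KvStore {
        final AtomicInteger puts = new AtomicInteger();
        void put(String key, String value) {
            try {
                Thread.sleep(5); // simulate network round-trip
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            puts.incrementAndGet();
        }
    }

    // write() returns as soon as the record is queued (producer side);
    // the executor thread performs the actual put (consumer side).
    static class AsyncRecordWriter implements AutoCloseable {
        private final KvStore store;
        private final ExecutorService executor = Executors.newSingleThreadExecutor();

        AsyncRecordWriter(KvStore store) {
            this.store = store;
        }

        void write(String key, String value) {
            executor.submit(() -> store.put(key, value)); // non-blocking hand-off
        }

        @Override
        public void close() throws InterruptedException {
            executor.shutdown();                          // stop accepting new records
            executor.awaitTermination(1, TimeUnit.MINUTES); // drain queued puts
        }
    }

    public static void main(String[] args) throws Exception {
        KvStore store = new KvStore();
        AsyncRecordWriter writer = new AsyncRecordWriter(store);
        for (int i = 0; i < 20; i++) {
            writer.write("key" + i, "value" + i);
        }
        writer.close(); // blocks until every queued put has completed
        System.out.println("puts=" + store.puts.get());
    }
}
```

A real version would also need a bounded queue (so a fast reducer can't queue
unbounded work) and a way to propagate put() failures back to the task.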

My understanding is that Context.write() calls RecordWriter.write() and
that these two calls are synchronous: the first blocks until the second
completes. Each reduce() call therefore blocks until context.write()
finishes, so the reduce on the next key also blocks, which is making things
run slowly in my case. Is this correct? Does this also mean that the
OutputFormat is instantiated once by the TaskTracker for the job's reduce
logic, and that all keys operated on by the reducers get the same instance
of the OutputFormat?
Or is it that a new OutputFormat is instantiated for each key operated on by
the reducer?
