hadoop-mapreduce-user mailing list archives

From Something Something <mailinglist...@gmail.com>
Subject Merging files
Date Wed, 31 Jul 2013 05:26:30 GMT

One of our Pig scripts creates over 500 small part files.  To save NameNode
namespace, we need to cut down the number of files, so instead of saving 500
small files we want to merge them into 50.  We tried the following:

1)  When we set the parallel number to 50, the Pig script takes a long time -
for obvious reasons (each of the 50 reducers has to process far more data).
2)  If we use Hadoop Streaming, it puts some garbage values into the key of
each output record.
3)  We wrote our own MapReduce program that reads these 500 small part
files and uses 50 reducers.  Basically, the Mappers simply pass each line
through, and the reducers loop through the values and write them out.  We set
job.setOutputKeyClass(NullWritable.class) so that the key is not written to
the output file.  This is performing better than Pig: the Mappers run
very fast, and although the Reducers take some time to complete, the
approach seems to be working well.  (A rough sketch of this job is below.)
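
In case it helps, here is a minimal sketch of the job described in 3), using
the newer org.apache.hadoop.mapreduce API.  The class names are made up for
this post and the input/output paths just come from the command line; the
real job has a bit more to it, but the structure is the same:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MergeFiles {

  // Mapper: passes each line through unchanged, keyed by its byte offset
  // so the lines get spread across the reducers.
  public static class PassThroughMapper
      extends Mapper<LongWritable, Text, LongWritable, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      context.write(key, value);
    }
  }

  // Reducer: loops through the values and writes each line out with a
  // NullWritable key, so only the line itself appears in the output.
  public static class PassThroughReducer
      extends Reducer<LongWritable, Text, NullWritable, Text> {
    @Override
    protected void reduce(LongWritable key, Iterable<Text> values,
        Context context) throws IOException, InterruptedException {
      for (Text line : values) {
        context.write(NullWritable.get(), line);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "merge small files");
    job.setJarByClass(MergeFiles.class);

    job.setMapperClass(PassThroughMapper.class);
    job.setReducerClass(PassThroughReducer.class);
    job.setNumReduceTasks(50);                    // 50 output part files

    job.setMapOutputKeyClass(LongWritable.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(NullWritable.class);    // key not written to output
    job.setOutputValueClass(Text.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

With 50 reduce tasks we get exactly 50 part files, and because the output
key class is NullWritable, the default TextOutputFormat writes only the
lines themselves, with no key or tab separator.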

Is there a better way to do this?  What strategies can you think of to
increase the speed of the reducers?

Any help in this regard will be greatly appreciated.  Thanks.
