I am using a secondary sort in my Hadoop program. My map function emits (Text, NullWritable) pairs, where the Text holds the composite key. I have implemented the appropriate comparison functions (a sort comparator and a grouping comparator) and a custom Partitioner, and these all seem to be working fine.
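For reference, here is a minimal sketch of what I mean (class names like `SecondarySortMapper` and `NaturalKeyPartitioner` are placeholders, and my real composite-key logic is more involved):

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;

public class SecondarySortMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {
    private final Text compositeKey = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Placeholder: the real code assembles "naturalKey#secondaryField"
        compositeKey.set(value);
        context.write(compositeKey, NullWritable.get());
    }
}

// Custom partitioner so all records sharing the natural key
// go to the same reducer.
class NaturalKeyPartitioner extends Partitioner<Text, NullWritable> {
    @Override
    public int getPartition(Text key, NullWritable value, int numPartitions) {
        String naturalKey = key.toString().split("#")[0];
        return (naturalKey.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```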

I have been struggling with the problem that these values are not being received by the reduce function; instead they are automatically written to HDFS in x output files, where x is the number of reducers. I have made sure the job's reducer class is set to my Reduce class and not the identity reducer.
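This is roughly how my reducer and job wiring look (again a sketch; `CompositeKeyComparator` and `NaturalKeyGroupingComparator` are placeholder names for my comparator classes). I added `@Override` specifically so the compiler would catch a signature mismatch, since a reduce method that does not exactly match `Reducer.reduce(KEY, Iterable<VALUE>, Context)` would silently fall back to the identity reduce:

```java
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SecondarySortReducer
        extends Reducer<Text, NullWritable, Text, NullWritable> {
    @Override // compiler error here would reveal a wrong signature
    protected void reduce(Text key, Iterable<NullWritable> values, Context context)
            throws IOException, InterruptedException {
        context.write(key, NullWritable.get());
    }
}
```

And in the driver:

```java
job.setMapperClass(SecondarySortMapper.class);
job.setReducerClass(SecondarySortReducer.class);
job.setPartitionerClass(NaturalKeyPartitioner.class);
job.setSortComparatorClass(CompositeKeyComparator.class);
job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class);
// Note: I have NOT called job.setNumReduceTasks(0), which would make
// the job map-only and write mapper output directly to HDFS.
```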

Can someone please explain this behavior and suggest what could possibly be wrong?

Thanks & Regards,