crunch-user mailing list archives

From Nithin Asokan <>
Subject Question about HBaseSourceTarget#getSize()
Date Mon, 16 Mar 2015 22:42:46 GMT
I came across some unexpected behavior while using HBaseSourceTarget. Suppose I
have a job (from MRPipeline) that reads from HBase using HBaseSourceTarget
and passes all the data to a reduce phase; the number of reducers set by the
planner will be 1. The reason is [1]. So it looks like the planner assumes
that only about 1 GB of data is read from the source, and sets the number of
reducers accordingly. However, whether my HBase scan returns very little data
or a huge amount, the planner still assigns 1 reducer
(crunch.bytes.per.reduce.task = 1 GB). What is more interesting is that, if
there are dependent jobs, the planner sets their number of reducers based on
the size initially determined from the HBase source.
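
To make this concrete, here is a minimal sketch of the kind of pipeline I
mean (the table name, output path, and keying logic are placeholders, not my
real job):

import org.apache.crunch.DoFn;
import org.apache.crunch.Emitter;
import org.apache.crunch.PTable;
import org.apache.crunch.Pair;
import org.apache.crunch.Pipeline;
import org.apache.crunch.fn.Aggregators;
import org.apache.crunch.impl.mr.MRPipeline;
import org.apache.crunch.io.To;
import org.apache.crunch.io.hbase.HBaseSourceTarget;
import org.apache.crunch.types.writable.Writables;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseCountJob {
  public static void main(String[] args) throws Exception {
    Pipeline pipeline = new MRPipeline(HBaseCountJob.class, new Configuration());

    // Read every row of the table.
    PTable<ImmutableBytesWritable, Result> rows =
        pipeline.read(new HBaseSourceTarget("my_table", new Scan()));

    // Emit one (rowKey, 1L) pair per row; the keying itself does not
    // matter here, it just forces a shuffle.
    PTable<String, Long> keyed = rows.parallelDo(
        new DoFn<Pair<ImmutableBytesWritable, Result>, Pair<String, Long>>() {
          @Override
          public void process(Pair<ImmutableBytesWritable, Result> in,
                              Emitter<Pair<String, Long>> emitter) {
            emitter.emit(Pair.of(Bytes.toString(in.first().get()), 1L));
          }
        },
        Writables.tableOf(Writables.strings(), Writables.longs()));

    // The planner sizes this shuffle from HBaseSourceTarget#getSize();
    // since that reports a fixed ~1 GB and crunch.bytes.per.reduce.task
    // defaults to 1 GB, the job always ends up with a single reducer.
    keyed.groupByKey()
         .combineValues(Aggregators.SUM_LONGS())
         .write(To.textFile("/tmp/out"));

    pipeline.done();
  }
}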

As a fix for the above problem, I can set the number of reducers on
groupByKey(), but that does not offer much flexibility when dealing with
data of varying sizes. The other option is to have a map-only job that reads
from HBase and writes to HDFS, followed by a run(); the next job will then
determine the size correctly, since FileSourceImpl calculates the size from
the actual files on HDFS.
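
Continuing the sketch above, here is roughly what those two workarounds look
like (the reducer count and staging path are made up; imports as above plus
org.apache.crunch.io.From):

// Workaround 1: pin the reducer count on the groupByKey explicitly.
// 50 is a guess, and no single guess fits both tiny and huge scans.
PTable<String, Long> counts = keyed.groupByKey(50)
    .combineValues(Aggregators.SUM_LONGS());

// Workaround 2: stage the mapped data on HDFS and split the plan with
// run(), so the follow-up job is sized from the staged files.
keyed.write(To.sequenceFile("/tmp/staging"));
pipeline.run();
PTable<String, Long> staged = pipeline.read(From.sequenceFile(
    "/tmp/staging", Writables.strings(), Writables.longs()));
// A groupByKey() on 'staged' now gets a reducer count derived from the
// real size on disk, since FileSourceImpl computes size from the files.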

I noticed the comment on HBaseSourceTarget, and was wondering whether there
are any plans to implement getSize() properly.


