hbase-user mailing list archives

From David Koch <ogd...@googlemail.com>
Subject Re: Controlling TableMapReduceUtil table split points
Date Sun, 06 Jan 2013 17:53:23 GMT
Hi Dhaval,

Good call on setBatch, I had forgotten about it. Like changing the
schema, it would mean adapting map(...) to account for the fact that only
part of a user's data is returned in each call, but I would not have to
manipulate table splits.
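For illustration, a minimal sketch of what such an adapted mapper could look like, assuming the standard TableMapper API; the class name and the choice to emit per-fragment cell counts (summed later in a reducer) are my own, not from the thread:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

// Hypothetical mapper that tolerates partial rows produced by scan.setBatch().
// With batching enabled, map() may be invoked several times for the same row
// key, each call carrying only a subset of that row's cells, so any per-user
// aggregation is deferred to a reducer keyed on the row key.
public class PartialRowMapper extends TableMapper<Text, LongWritable> {
  @Override
  protected void map(ImmutableBytesWritable key, Result value, Context context)
      throws IOException, InterruptedException {
    // Emit this fragment's cell count under the row key; a reducer can sum
    // the fragments back into one total per user.
    context.write(new Text(key.get()), new LongWritable(value.size()));
  }
}
```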

The HBase book does suggest that it's bad practice to use the "logical"
schema of lumping all of a user's data into a single row(*), but I'll do
some testing to see what works.

Thank you,


(*) Chapter 9, section "Tall-Narrow Versus Flat-Wide Tables", 3rd ed., page

On Sun, Jan 6, 2013 at 6:29 PM, Dhaval Shah <prince_mithibai@yahoo.co.in>wrote:

> Another option to avoid the timeout/OOME issues is to use scan.setBatch()
> so that the scanner functions normally for small rows but breaks up
> large rows into multiple Result objects, which you can use in
> conjunction with scan.setCaching() to control how much data you get back
> per RPC. This approach would not need a change in your schema design and
> would still ensure that only one mapper processes the entire row (but
> across multiple calls to the map function).
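For reference, the combination described above could be wired up roughly as follows. This is a sketch, not code from the thread: the table name, class names, and numeric values are placeholders, and the trivial inline mapper exists only to make the example self-contained.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class WideRowScanJob {

  // Placeholder mapper: emits each row fragment's cell count under its row key.
  public static class FragmentCountMapper extends TableMapper<Text, LongWritable> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(key.get()), new LongWritable(value.size()));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(HBaseConfiguration.create(), "wide-row scan");
    Scan scan = new Scan();
    scan.setCaching(500);       // row fragments fetched per RPC round trip
    scan.setBatch(100);         // cap cells per Result: a huge row arrives
                                // as several map() calls instead of one
    scan.setCacheBlocks(false); // commonly recommended for full-table MR scans
    TableMapReduceUtil.initTableMapperJob(
        "users", scan, FragmentCountMapper.class,   // "users" is a placeholder
        Text.class, LongWritable.class, job);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Because an HBase row never spans regions and the job's input splits are per-region, all fragments of one row still reach the same mapper, as noted above.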
