hbase-user mailing list archives

From John <johnnyenglish...@gmail.com>
Subject Re: HBase Region Server crash if column size becomes too big
Date Wed, 11 Sep 2013 11:42:24 GMT
Hi,

thanks for your fast answer! By "the size becomes too big" I mean that I have
one row with thousands of columns. For example:

myrowkey1 -> column1, column2, column3 ... columnN

What do you mean by "change the batch size"? I will try to create a little
Java test program to reproduce the problem; it will take a moment.
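Something along these lines, as a first sketch (the table name "testtable" and
column family "cf" are just placeholders I made up for the test; the table has
to exist already, and the column count is only a guess at what triggers it):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class WideRowRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "testtable"); // placeholder name
        byte[] row = Bytes.toBytes("myrowkey1");
        byte[] cf = Bytes.toBytes("cf");              // placeholder family

        // Write a single row with a very large number of columns.
        Put put = new Put(row);
        for (int i = 0; i < 200000; i++) {
            put.add(cf, Bytes.toBytes("column" + i), Bytes.toBytes("v" + i));
        }
        table.put(put);

        // Fetch the whole row in one RPC -- this is the call that
        // produces the (operationTooLarge) warning for me.
        Result result = table.get(new Get(row));
        System.out.println("columns returned: " + result.size());
        table.close();
    }
}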




2013/9/11 Jean-Marc Spaggiari <jean-marc@spaggiari.org>

> Hi John,
>
> Just to be sure: what do you mean by "the size becomes too big"? The size of
> a single column within this row, or the number of columns?
>
> If it's the number of columns, you can change the batch size to get fewer
> columns back in a single call. Can you share the relevant piece of code doing
> the call?
>
> JM
>
>
> 2013/9/11 John <johnnyenglish739@gmail.com>
>
> > Hi,
> >
> > I store a lot of columns for one row key, and if the size becomes too big,
> > the relevant Region Server crashes when I try to get or scan the row. For
> > example, if I try to get the relevant row, I get this error:
> >
> > 2013-09-11 12:46:43,696 WARN org.apache.hadoop.ipc.HBaseServer: (operationTooLarge): {"processingtimems":3091,"client":"192.168.0.34:52488","ti$
> >
> > If I try to load the relevant row via Apache Pig and the HBaseStorage
> > loader (which uses the scan operation), I get this message, and after that
> > the Region Server crashes:
> >
> > 2013-09-11 10:30:23,542 WARN org.apache.hadoop.ipc.HBaseServer: (responseTooLarge): {"processingtimems":1851,"call":"next(-588368116791418695, 1), rpc version=1, client version=29,$
> >
> > I'm using Cloudera CDH 4.4.0 with HBase 0.94.6-cdh4.4.0.
> >
> > Any clues?
> >
> > regards
> >
>
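PS: If by "batch size" you mean Scan.setBatch(...), then something like the
following is what I would try for the scan case, right? (Again, "testtable"
and "cf" are placeholder names, and 1000 is an arbitrary batch size.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedWideRowScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "testtable"); // placeholder name
        byte[] row = Bytes.toBytes("myrowkey1");

        Scan scan = new Scan(row);
        // Stop right after the single wide row (stopRow is exclusive).
        scan.setStopRow(Bytes.add(row, new byte[] { 0 }));
        scan.setBatch(1000);  // at most 1000 columns per Result/RPC
        scan.setCaching(1);   // one (partial) row per round trip

        ResultScanner scanner = table.getScanner(scan);
        int partials = 0;
        for (Result r : scanner) {
            // With setBatch set, a wide row comes back as several
            // partial Results instead of one huge response.
            partials++;
        }
        scanner.close();
        table.close();
        System.out.println("partial results: " + partials);
    }
}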
