hbase-user mailing list archives

From lars hofhansl <lhofha...@yahoo.com>
Subject Re: No of rows
Date Wed, 12 Sep 2012 23:48:59 GMT
No. By default each call to ClientScanner.next(...) incurs an RPC call to the HBase server,
which is why it is important to enable scanner caching (as opposed to batching) if you expect
to scan many rows.
By default scanner caching is set to 1.
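
For illustration, a minimal sketch of raising the caching on a scan (the "table" handle, the start key, and the caching value of 100 are placeholders, not anything from this thread):

    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    // "table" is an already-constructed HTable; the start key is illustrative.
    Scan scan = new Scan(Bytes.toBytes("startKey"));
    scan.setCaching(100);                  // fetch 100 rows per RPC instead of the default 1
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result result : scanner) {
        // process result
      }
    } finally {
      scanner.close();
    }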



________________________________
 From: Mohit Anchlia <mohitanchlia@gmail.com>
To: user@hbase.apache.org 
Sent: Wednesday, September 12, 2012 4:29 PM
Subject: Re: No of rows
 
But when the ResultScanner executes, wouldn't it already query the servers for
all the rows matching the start key? I am trying to avoid reading all the
blocks from the file system that match the keys.

On Wed, Sep 12, 2012 at 3:59 PM, Doug Meil <doug.meil@explorysmedical.com> wrote:

>
> Hi there,
>
> If you're talking about stopping a scan after X rows (as opposed to the
> batching), then break out of the ResultScanner loop after X rows.
>
> http://hbase.apache.org/book.html#data_model_operations
>
> You can either add a ColumnFamily to a Scan, or add specific attributes
> (i.e., "cf:column") to a Scan (see the sketch after the quoted messages below).
>
>
>
>
> On 9/12/12 6:50 PM, "Mohit Anchlia" <mohitanchlia@gmail.com> wrote:
>
> >I am using the 0.90.5 client jar
> >
> >Is there a way to limit how many rows can be fetched in one scan call?
> >
> >Similarly, is there something for columns?
>
>
>
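
Putting Doug's two suggestions together, here is a rough sketch (the family "cf", column "col", and the row limit are placeholder values; classes as in the sketch above):

    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col")); // restrict the scan to one column
    scan.setCaching(100);                                      // batch rows per RPC
    ResultScanner scanner = table.getScanner(scan);
    int rows = 0;
    int maxRows = 1000;   // "X rows" -- placeholder limit
    try {
      for (Result result : scanner) {
        // process result ...
        if (++rows >= maxRows) {
          break;          // stop the scan client-side after X rows
        }
      }
    } finally {
      scanner.close();
    }

Note that the limit is enforced on the client side; with caching set to 100 the client may already have fetched up to one extra batch of rows by the time it breaks out of the loop.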