hbase-user mailing list archives

From HARI KUMAR <harikum2...@gmail.com>
Subject Re: Getting ScannerTimeoutException even after several calls in the specified time limit
Date Tue, 11 Sep 2012 10:30:15 GMT
For GC monitoring, add

export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"

to hbase-env.sh, then view the resulting log file with a tool like "GCViewer", or use a tool like VisualVM to look at your GC activity live.
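(Editorial sketch, not part of the original message: the same hbase-env.sh addition written out, assuming a standard $HBASE_HOME layout; the grep at the end is only a quick way to spot stop-the-world pauses without a GUI. These are JDK 6/7-era GC flags, matching hbase-0.92; JDK 9+ replaced them with -Xlog:gc.)

```shell
# In $HBASE_HOME/conf/hbase-env.sh: append GC logging flags to HBASE_OPTS
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"

# Once the region server has run for a while, scan the log for full GCs,
# which are the pauses long enough to expire a scanner lease:
[ -f "$HBASE_HOME/logs/gc-hbase.log" ] && \
  grep "Full GC" "$HBASE_HOME/logs/gc-hbase.log" | tail -5
```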

./hari
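(Editorial sketch, not part of the original message: the exception quoted below says "timeout is currently set to 60000", which is the scanner lease. One hedged workaround while debugging is to raise that lease in hbase-site.xml; in the 0.92 line the property is named hbase.regionserver.lease.period, renamed hbase.client.scanner.timeout.period in later releases. Raising it only hides slow next() calls, so it is a stopgap, not a fix.)

```xml
<!-- hbase-site.xml (sketch): raise the scanner lease above the 60000 ms
     default seen in the exception; 0.92.x property name -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>120000</value>
</property>
```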


On Tue, Sep 11, 2012 at 2:11 PM, Dhirendra Singh <dpsdce@gmail.com> wrote:

> No i am not doing parallel scans,
>
> * If yes, check the time taken for GC and
> the number of calls that can be served at your end point*.
>
>  could you please tell me how to do that, where can i see the GC logs?
>
>
> On Tue, Sep 11, 2012 at 12:54 PM, HARI KUMAR <harikum2002@gmail.com> wrote:
>
>> Hi,
>>
>> Are u trying to do parallel scans. If yes, check the time taken for GC and
>> the number of calls that can be served at your end point.
>>
>> Best Regards
>> N.Hari Kumar
>>
>> On Tue, Sep 11, 2012 at 8:22 AM, Dhirendra Singh <dpsdce@gmail.com>
>> wrote:
>>
>> > i tried with a smaller caching, i.e. 10, and it failed again. no, it's
>> > not really a big cell. this small cluster (4 nodes) is only used for
>> > HBase, and i am currently using hbase-0.92.1-cdh4.0.1. could you let me
>> > know how i could debug this issue?
>> >
>> >
>> > Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException:
>> > 99560ms passed since the last invocation, timeout is currently set to
>> > 60000
>> >         at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1302)
>> >         at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1399)
>> >         ... 5 more
>> > Caused by: org.apache.hadoop.hbase.UnknownScannerException:
>> > org.apache.hadoop.hbase.UnknownScannerException: Name:
>> > -8889369042827960647
>> >         at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2114)
>> >         at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
>> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >         at java.lang.reflect.Method.invoke(Method.java:597)
>> >         at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>> >         at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
>> >
>> >
>> >
>> > On Mon, Sep 10, 2012 at 10:53 PM, Stack <stack@duboce.net> wrote:
>> >
>> > > On Mon, Sep 10, 2012 at 10:13 AM, Dhirendra Singh <dpsdce@gmail.com>
>> > > wrote:
>> > > > I am facing this exception while iterating over a big table; by
>> > > > default i have specified caching as 100.
>> > > >
>> > > > i am getting the below exception, even though i checked there were
>> > > > several calls made to the scanner before it threw this exception,
>> > > > but somehow it's saying 86095ms passed since the last invocation.
>> > > >
>> > > > i also observed that if i set scan.setCaching(false), it succeeds.
>> > > > could someone please explain, or point me to some document about
>> > > > what's happening here and what the best practices are to avoid it?
>> > > >
>> > > >
>> > >
>> > > Try again with caching < 100.  See if it works.  A big cell?  A GC pause?
>> > > You should be able to tell roughly which server is being traversed
>> > > when you get the timeout.  Anything else going on on that server at
>> > > the time?  What version of HBase?
>> > > St.Ack
>> > >
>> >
>> >
>> >
>> > --
>> > Warm Regards,
>> > Dhirendra Pratap
>> > +91. 9717394713
>> >
>>
>>
>>
>> --
>> FROM
>>     HARI KUMAR.N
>>
>
>
>
> --
> Warm Regards,
> Dhirendra Pratap
> +91. 9717394713
>
>
>
>


-- 
FROM
    HARI KUMAR.N
