hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: OutOfOrderScannerNextException
Date Wed, 09 Sep 2015 06:45:31 GMT
Yes.

  <property>
    <name>hbase.client.scanner.caching</name>
    <value>100</value>
    <description>Number of rows that will be fetched when calling next
    on a scanner if it is not served from (local, client) memory. Higher
    caching values will enable faster scanners but will eat up more memory
    and some calls of next may take longer and longer times when the cache
    is empty. Do not set this value such that the time between invocations
    is greater than the scanner timeout; i.e.
    hbase.client.scanner.timeout.period.</description>
  </property>
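And if the scans legitimately pause longer than the lease between calls to
next, the timeout itself can also be raised in hbase-site.xml. A sketch (the
60000 ms value is, as far as I recall, the 1.x default; adjust to your scan's
pauses):

  <property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>60000</value>
    <description>Client scanner lease period in milliseconds. The region
    server will discard a scanner that has not called next within this
    time, after which the client sees OutOfOrderScannerNextException on
    its retry.</description>
  </property>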

St.Ack

On Tue, Sep 8, 2015 at 11:18 PM, Li Li <fancyerii@gmail.com> wrote:

> Is it possible to set it using hbase-site.xml?
> I can't modify the Titan code; it only reads the HBase configuration file.
>
> On Wed, Sep 9, 2015 at 1:38 AM, Stack <stack@duboce.net> wrote:
> > Are you seeing hiccups in the scans ahead of this exception -- scan next
> > retrying? Are the rows large? Try fetching in smaller batches, smaller
> > than 100.
> > St.Ack
> >
> > On Mon, Sep 7, 2015 at 3:56 AM, Li Li <fancyerii@gmail.com> wrote:
> >
> >> I am using Titan, which uses HBase as its storage engine. The HBase
> >> version is 1.0.0-cdh5.4.4.
> >> It's a full table scan over a large table. Is there any configuration
> >> I can change to tackle this problem?
> >> The exception stack is:
> >>
> >> Exception in thread "main" java.lang.RuntimeException:
> >> org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of
> >> OutOfOrderScannerNextException: was there a rpc timeout?
> >>         at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
> >>         at com.google.common.collect.Iterators$8.computeNext(Iterators.java:686)
> >>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> >>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> >>         at com.thinkaurelius.titan.diskstorage.hbase.HBaseKeyColumnValueStore$RowIterator.hasNext(HBaseKeyColumnValueStore.java:289)
> >>         at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph$2.hasNext(StandardTitanGraph.java:343)
> >>         at com.thinkaurelius.titan.graphdb.transaction.VertexIterable$1.nextVertex(VertexIterable.java:34)
> >>         at com.thinkaurelius.titan.graphdb.transaction.VertexIterable$1.next(VertexIterable.java:55)
> >>         at com.thinkaurelius.titan.graphdb.transaction.VertexIterable$1.next(VertexIterable.java:27)
> >>         at com.google.common.collect.Iterators$8.computeNext(Iterators.java:687)
> >>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> >>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> >>         at com.mobvoi.knowledgegraph.storage.v2.music.DumpAllNodesAndEdges.main(DumpAllNodesAndEdges.java:34)
> >> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after
> >> retry of OutOfOrderScannerNextException: was there a rpc timeout?
> >>         at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:384)
> >>         at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:91)
> >>         ... 12 more
> >> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
> >> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
> >> Expected nextCallSeq: 2 But the nextCallSeq got from client: 1;
> >> request=scanner_id: 11996 number_of_rows: 100 close_scanner: false
> >> next_call_seq: 1
> >>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2144)
> >>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
> >>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
> >>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> >>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >>         at java.lang.Thread.run(Thread.java:745)
> >>
> >>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> >>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> >>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> >>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> >>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:284)
> >>         at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:198)
> >>         at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:57)
> >>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
> >>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
> >>         at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:336)
