hbase-user mailing list archives

From vishnu rao <jaihind...@gmail.com>
Subject Re: after server restart - getting exception - java.io.IOException: Timed out waiting for lock for row
Date Thu, 23 Jun 2016 07:12:34 GMT
Hi guys,

Good news: the PUT latencies are back to normal, i.e. milliseconds rather
than 60 seconds. :)

*Here is what I did to make it work:*

(1) Stop the app; get the count of rows in SYSTEM.STATS.
(2) Truncate SYSTEM.STATS.
(3) Run a major compaction for the main table and the index tables.
(4) Once the major compaction completed, SYSTEM.STATS was repopulated and
had the same number of rows as prior to the truncate.
(5) Start the app; latencies were back to normal.
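For reference, the recovery sequence above can be sketched as shell commands. This is a hedged illustration rather than the exact commands used in this thread: the table name is taken from the logs in this thread, the ZooKeeper quorum (`localhost:2181`) and the use of Phoenix's `sqlline.py` for the row count are assumptions, and each Phoenix index table would need its own `major_compact`:

```shell
# Sketch of the recovery steps, assuming `hbase shell` and Phoenix's
# sqlline.py are on the PATH; the connection string is an example.

# (1) stop the app, then record the SYSTEM.STATS row count
echo "SELECT COUNT(*) FROM SYSTEM.STATS;" | sqlline.py localhost:2181

# (2) truncate SYSTEM.STATS (hbase shell's truncate disables, drops,
#     and recreates the table)
echo "truncate 'SYSTEM.STATS'" | hbase shell

# (3) major-compact the main table; repeat for each index table
echo "major_compact 'BIDDING_EVENTS_PROD'" | hbase shell

# (4) when compaction finishes, verify the row count matches the value
#     from step (1), then (5) start the app again
echo "SELECT COUNT(*) FROM SYSTEM.STATS;" | sqlline.py localhost:2181
```

Note that truncating SYSTEM.STATS alone was not enough (see below); the major compaction appears to be what repopulates the statistics.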

*What I did when it did NOT work:*

(1) Stop the app.
(2) Truncate SYSTEM.STATS.
(3) Start the app.

*A little about our environment:*

Hortonworks HDP 2.3 - HBase 1.1 with Apache Phoenix 4.6 - 10 region
servers - about 5 TB compressed (TTL set to 1 week) - 1 table with 3 Phoenix
indexes on it




thanks guys for the assistance.


On Thu, Jun 23, 2016 at 2:28 PM, vishnu rao <jaihind213@gmail.com> wrote:

> here are the logs. (pretty much same on all region servers)
>
>
> -------------------------------------------------------------------------------------------------------------
>
> 2016-06-23 03:18:07,187 ERROR
> [B.defaultRpcServer.handler=80,queue=0,port=16020] parallel.BaseTaskRunner:
> Found a failed task because: org.apache.hadoop.hbase.DoNotRetryIOException:
> ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached
> index metadata.  key=8709114428974824565
> region=BIDDING_EVENTS_PROD,\x81\xB4\xE8\x1BN\x81\xB4\xE0\x00\x00\x00\x00\x00\x00\x00\x00,1459280123030.0964a4ebb61c82093783197cb3395799.
> Index update failed
>
> java.util.concurrent.ExecutionException:
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata.
> key=8709114428974824565
> region=BIDDING_EVENTS_PROD,\x81\xB4\xE8\x1BN\x81\xB4\xE0\x00\x00\x00\x00\x00\x00\x00\x00,1459280123030.0964a4ebb61c82093783197cb3395799.
> Index update failed
>
>         at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
>         at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
>         at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
>         at org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>         at org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>         at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:140)
>         at org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
>         at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:203)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:991)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:987)
>         at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3027)
>         at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
>         at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>         at java.lang.Thread.run(Thread.java:745)
>
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008
> (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.
> key=8709114428974824565
> region=BIDDING_EVENTS_PROD,\x81\xB4\xE8\x1BN\x81\xB4\xE0\x00\x00\x00\x00\x00\x00\x00\x00,1459280123030.0964a4ebb61c82093783197cb3395799.
> Index update failed
>
>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:76)
>         at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>         at org.apache.phoenix.index.PhoenixIndexCodec.getIndexMaintainers(PhoenixIndexCodec.java:102)
>         at org.apache.phoenix.index.PhoenixIndexCodec.getIndexUpdates(PhoenixIndexCodec.java:128)
>         at org.apache.phoenix.index.PhoenixIndexCodec.getIndexDeletes(PhoenixIndexCodec.java:117)
>         at org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addDeleteUpdatesToMap(CoveredColumnsIndexBuilder.java:403)
>         at org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addCleanupForCurrentBatch(CoveredColumnsIndexBuilder.java:287)
>         at org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addMutationsForBatch(CoveredColumnsIndexBuilder.java:239)
>         at org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.batchMutationAndAddUpdates(CoveredColumnsIndexBuilder.java:136)
>         at org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.getIndexUpdate(CoveredColumnsIndexBuilder.java:99)
>         at org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:133)
>         at org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:129)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         ... 1 more
>
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find
> cached index metadata.  key=8709114428974824565
> region=BIDDING_EVENTS_PROD,\x81\xB4\xE8\x1BN\x81\xB4\xE0\x00\x00\x00\x00\x00\x00\x00\x00,1459280123030.0964a4ebb61c82093783197cb3395799.
>
>         at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
>         at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>         at org.apache.phoenix.index.PhoenixIndexCodec.getIndexMaintainers(PhoenixIndexCodec.java:100)
> che.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008
> (INT10): Unable to find cached index metadata.  key=8709114428974824565
> region=BIDDING_EVENTS_PROD,\x81\xB4\xE8\x1BN\x81\xB4\xE0\x00\x00\x00\x00\x00\x00\x00\x00,1459280123030.0964a4ebb61c82093783197cb3395799.
> Index update failed
>
> 2016-06-23 03:18:07,193 ERROR
> [B.defaultRpcServer.handler=80,queue=0,port=16020]
> builder.IndexBuildManager: Found a failed index update!
>
> 2016-06-23 03:18:07,194 INFO
> [B.defaultRpcServer.handler=80,queue=0,port=16020]
> util.IndexManagementUtil: Rethrowing
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata.
> key=8709114428974824565
> region=BIDDING_EVENTS_PROD,\x81\xB4\xE8\x1BN\x81\xB4\xE0\x00\x00\x00\x00\x00\x00\x00\x00,1459280123030.0964a4ebb61c82093783197cb3395799.
> Index update failed
>
> 2016-06-23 03:18:07,373 INFO
> [B.defaultRpcServer.handler=70,queue=0,port=16020]
> util.IndexManagementUtil: Rethrowing
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata.
> key=8709114428974824565
> region=BIDDING_EVENTS_PROD,\x81\xB4\xE8\x1BN\x81\xB4\xE0\x00\x00\x00\x00\x00\x00\x00\x00,1459280123030.0964a4ebb61c82093783197cb3395799.
> Index update failed
>
> 2016-06-23 03:18:07,385 INFO
> [B.defaultRpcServer.handler=0,queue=0,port=16020] util.IndexManagementUtil:
> Rethrowing org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008
> (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.
> key=8709114428974824565
> region=BIDDING_EVENTS_PROD,\x81\xB4\xE8\x1BN\x81\xB4\xE0\x00\x00\x00\x00\x00\x00\x00\x00,1459280123030.0964a4ebb61c82093783197cb3395799.
> Index update failed
>
> 2016-06-23 03:18:07,432 WARN
> [B.defaultRpcServer.handler=80,queue=0,port=16020] ipc.RpcServer:
> (responseTooSlow):
> {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1466651762838,"responsesize":6135137,"method":"Multi","processingtimems":124593,"client":"
> 10.5.2.119:37665","queuetimems":0,"class":"HRegionServer"}
>
> 2016-06-23 03:18:07,472 WARN
> [B.defaultRpcServer.handler=70,queue=0,port=16020] ipc.RpcServer:
> (responseTooSlow):
> {"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1466651822923,"responsesize":7892321,"method":"Multi","processingtimems":64548,"client":"
> 10.5.2.119:37665","queuetimems":0,"class":"HRegionServer"}
>
> 2016-06-23 03:18:10,089 WARN
> [B.defaultRpcServer.handler=10,queue=0,port=16020] regionserver.HRegion:
> Failed getting lock in batch put,
> row=\xE2\xF6\xA4=\x98(\xD6\x91\xD03\xE3\x1D\xBCrQ\xC9
>
> java.io.IOException: Timed out waiting for lock for row:
> \xE2\xF6\xA4=\x98(\xD6\x91\xD03\xE3\x1D\xBCrQ\xC9
>
>         at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
>         at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2944)
>         at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
>         at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>         at java.lang.Thread.run(Thread.java:745)
>
>
> 2016-06-23 03:18:10,540 INFO  [sync.1] wal.FSHLog: Slow sync cost: 620 ms,
> current pipeline: [DatanodeInfoWithStorage[10.5.2.225:50010,DS-4fb755f7-6c7d-457b-b088-bb6733d70cca,DISK],
> DatanodeInfoWithStorage[10.5.2.48:50010,DS-1fc61d0b-018a-45c6-b48c-a6404d802deb,DISK],
> DatanodeInfoWithStorage[10.5.2.165:50010
> ,DS-b6355df6-9678-4350-936d-749ad597cbe9,DISK]]
>
>
> -------------------------------------------------------------------------------------------------------------
> On Thu, Jun 23, 2016 at 1:52 PM, rajeshbabu@apache.org <chrajeshbabu32@gmail.com> wrote:
>
>> +user@phoenix
>>
>> Hi Vishnu,
>>
>> Can you try restarting the region server where you are seeing the timeouts
>> on row locks? It would be helpful if you could share the RS logs.
>> Can you also provide details such as what kind of operations were done
>> before the restart, and would you share the table schemas?
>>
>> Thanks,
>> Rajeshbabu.
>>
>> On Thu, Jun 23, 2016 at 9:31 AM, vishnu rao <jaihind213@gmail.com> wrote:
>>
>> > I tried the following:
>> >
>> > 1) truncating SYSTEM.STATS did not work.
>> > 2) setting phoenix.stats.useCurrentTime=false
>> >
>> > but no luck - the wait time increased even further
>> >
>> > On Thu, Jun 23, 2016 at 9:04 AM, vishnu rao <jaihind213@gmail.com>
>> wrote:
>> >
>> > > Hi Biju
>> > >
>> > > Yes local index
>> > >
>> > > . It all started when 1 box crashed.
>> > >
>> > > When I brought up a new one the error was localized to the new box.
>> > >
>> > > After cluster restart - it's spread to all servers.
>> > >
>> > > I shall attempt to clear system stats and increase the meta cache size.
>> > >
>> > > Vishnu,
>> > >     Are you using "local index" on any of the tables? We have seen
>> > > similar issues while using "local index".
>> > >
>> > > On Wed, Jun 22, 2016 at 12:25 PM, vishnu rao <jaihind213@gmail.com>
>> > wrote:
>> > >
>> > > > The server dies when trying to take the thread dump.
>> > > >
>> > > > I believe I am experiencing this bug:
>> > > >
>> > > > https://issues.apache.org/jira/browse/PHOENIX-2508
>> > > >
>> > > > On Wed, Jun 22, 2016 at 5:03 PM, Heng Chen <heng.chen.1986@gmail.com> wrote:
>> > > >
>> > > > > which thread holds the row lock? could you dump the jstack with
>> > > > > 'jstack -l pid'?
>> > > > >
>> > > > > 2016-06-22 16:14 GMT+08:00 vishnu rao <jaihind213@gmail.com>:
>> > > > >
>> > > > > > hi Heng.
>> > > > > >
>> > > > > > 2016-06-22 08:13:42,256 WARN
>> > > > > > [B.defaultRpcServer.handler=32,queue=2,port=16020]
>> > > > regionserver.HRegion:
>> > > > > > Failed getting lock in batch put,
>> > > > > > row=\x01\xD6\xFD\xC9\xDC\xE4\x08\xC4\x0D\xBESM\xC2\x82\x14Z
>> > > > > >
>> > > > > > java.io.IOException: Timed out waiting for lock for row:
>> > > > > > \x01\xD6\xFD\xC9\xDC\xE4\x08\xC4\x0D\xBESM\xC2\x82\x14Z
>> > > > > >
>> > > > > > at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
>> > > > > > at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2944)
>> > > > > > at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
>> > > > > > at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
>> > > > > > at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>> > > > > > at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>> > > > > > at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
>> > > > > > at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
>> > > > > > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>> > > > > > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>> > > > > > at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>> > > > > > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>> > > > > > at java.lang.Thread.run(Thread.java:745)
>> > > > > >
>> > > > > > On Wed, Jun 22, 2016 at 3:50 PM, Heng Chen <heng.chen.1986@gmail.com> wrote:
>> > > > > >
>> > > > > > > Could you paste the whole jstack and related rs log? It seems
>> > > > > > > the row write lock was occupied by some thread. Need more
>> > > > > > > information to find it.
>> > > > > > > 2016-06-22 13:48 GMT+08:00 vishnu rao <jaihind213@gmail.com>:
>> > > > > > >
>> > > > > > > > need some help. this has happened for 2 of my servers
>> > > > > > > > -------------
>> > > > > > > >
>> > > > > > > > *[B.defaultRpcServer.handler=2,queue=2,port=16020] regionserver.HRegion:
>> > > > > > > > Failed getting lock in batch put,
>> > > > > > > > row=a\xF7\x1D\xCBdR\xBC\xEC_\x18D>\xA2\xD0\x95\xFF*
>> > > > > > > >
>> > > > > > > > *java.io.IOException: Timed out waiting for lock for row:
>> > > > > > > > a\xF7\x1D\xCBdR\xBC\xEC_\x18D>\xA2\xD0\x95\xFF*
>> > > > > > > >
>> > > > > > > > at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
>> > > > > > > > at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2944)
>> > > > > > > > at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
>> > > > > > > > at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
>> > > > > > > > at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>> > > > > > > > at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>> > > > > > > > at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
>> > > > > > > > at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
>> > > > > > > > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>> > > > > > > > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>> > > > > > > > at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>> > > > > > > > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>> > > > > > > > at java.lang.Thread.run(Thread.java:745)
>> > > > > > > >
>> > > > > > > > --
>> > > > > > > > with regards,
>> > > > > > > > ch Vishnu
>> > > > > > > > mash213.wordpress.com
>> > > > > > > > doodle-vishnu.blogspot.in
>> > > > > > > >
>> > > > > > >
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > --
>> > > > > > with regards,
>> > > > > > ch Vishnu
>> > > > > > mash213.wordpress.com
>> > > > > > doodle-vishnu.blogspot.in
>> > > > > >
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > with regards,
>> > > > ch Vishnu
>> > > > mash213.wordpress.com
>> > > > doodle-vishnu.blogspot.in
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > with regards,
>> > ch Vishnu
>> > mash213.wordpress.com
>> > doodle-vishnu.blogspot.in
>> >
>>
>
>
>
> --
> with regards,
> ch Vishnu
> mash213.wordpress.com
> doodle-vishnu.blogspot.in
>



-- 
with regards,
ch Vishnu
mash213.wordpress.com
doodle-vishnu.blogspot.in
