hbase-user mailing list archives

From Heng Chen <heng.chen.1986@gmail.com>
Subject Re: After namenode failed, some regions stuck in Closed state
Date Tue, 12 Jan 2016 06:05:28 GMT
After assigning the region manually, everything is OK now. Thanks, Ted.
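
(For anyone landing here later, a minimal sketch of the manual assign, assuming the HBase shell and the encoded region name from the RIT listing quoted below:

    hbase shell
    hbase(main):001:0> assign '4a5c3511dc0b880d063e56042a7da547'

assign asks the master to re-dispatch the region; the shell flags it as an expert command, but a region stuck in CLOSED with no RIT progress is the case it exists for.)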

2016-01-12 11:37 GMT+08:00 Ted Yu <yuzhihong@gmail.com>:

> Do you see the table descriptor (on HDFS) for region 4a5c3511dc0b880d063e56042a7da547?
>
> Have you run fsck to see if there are any corrupt blocks?
>
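
(Both checks as a hedged sketch, assuming the default /hbase root directory on HDFS; adjust paths to your layout:

    # the table descriptor should exist under the table dir
    hdfs dfs -ls /hbase/data/default/PIPE.TABLE_CONFIG/.tabledesc

    # scan the HBase root dir for missing or corrupt blocks
    hdfs fsck /hbase -files -blocks

A healthy fsck run ends with "The filesystem under path '/hbase' is HEALTHY"; corrupt blocks are flagged explicitly.)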
> Cheers
>
> On Mon, Jan 11, 2016 at 6:52 PM, Heng Chen <heng.chen.1986@gmail.com> wrote:
>
> > Some related region logs on the RS:
> >
> >
> > 2016-01-12 10:45:01,570 INFO  [PriorityRpcServer.handler=14,queue=0,port=16020] regionserver.RSRpcServices: Open PIPE.TABLE_CONFIG,\x01\x00\x00\x00\x00\x00,1451875306059.4a5c3511dc0b880d063e56042a7da547.
> > 2016-01-12 10:45:01,573 ERROR [RS_OPEN_REGION-dx-pipe-regionserver3-online:16020-0] handler.OpenRegionHandler: Failed open of region=PIPE.TABLE_CONFIG,\x01\x00\x00\x00\x00\x00,1451875306059.4a5c3511dc0b880d063e56042a7da547., starting to roll back the global memstore size.
> > java.lang.IllegalStateException: Could not instantiate a region instance.
> >         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5836)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6143)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6115)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6071)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6022)
> >         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
> >         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> >         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >         at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.lang.reflect.InvocationTargetException
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >         at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5833)
> >         ... 10 more
> > Caused by: java.lang.IllegalArgumentException: Need table descriptor
> >         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:643)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:620)
> >         ... 15 more
> >
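(Note on the "Need table descriptor" cause above: HRegion refuses to open a region whose table .tableinfo file cannot be read from HDFS, which is exactly what a dead namenode looks like from the regionserver side. If the descriptor file is genuinely missing rather than just unreachable, hbck in this HBase 1.x line can regenerate it; a hedged sketch, online mode only:

    hbase hbck -fixTableOrphans

In this thread the descriptor was intact, so a plain manual assign sufficed once HDFS was reachable again.)
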
> > 2016-01-12 10:42 GMT+08:00 Heng Chen <heng.chen.1986@gmail.com>:
> >
> > > Information from the Web UI:
> > >
> > > Region:        4a5c3511dc0b880d063e56042a7da547
> > >                PIPE.TABLE_CONFIG,\x01\x00\x00\x00\x00\x00,1451875306059.4a5c3511dc0b880d063e56042a7da547.
> > > State:         state=CLOSED, ts=Tue Jan 12 10:18:06 CST 2016 (1243s ago), server=dx-pipe-regionserver3-online,16020,1452554429647
> > > RIT time (ms): 1243053
> > >
> > >
> > >
> > >
> > > Some error logs on the master:
> > >
> > > 2016-01-12 07:18:18,345 ERROR [PriorityRpcServer.handler=10,queue=0,port=16000] master.MasterRpcServices: Region server dx-pipe-regionserver4-online,16020,1447236435629 reported a fatal error:
> > > ABORTING region server dx-pipe-regionserver4-online,16020,1447236435629: Replay of WAL required. Forcing server shutdown
> > > Cause:
> > > org.apache.hadoop.hbase.DroppedSnapshotException: region: ape_fenbi_exercise,\xCF\xB7\x9D\x02\x00\x00\x00\x00_\x00\x00\x00\x00^-\xF0_\x00\x00\x00\x00\x00\x00\x00,,1451863106090.2ee0e6e2baed75e214cc4074ff51d33b.
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2346)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2049)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2011)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1903)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1829)
> > >         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:510)
> > >         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471)
> > >         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75)
> > >         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
> > >         at java.lang.Thread.run(Thread.java:745)
> > > Caused by: java.net.ConnectException: Call From dx-pipe-regionserver4-online/10.11.51.89 to f04:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> > >         at sun.reflect.GeneratedConstructorAccessor112.newInstance(Unknown Source)
> > >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > >         at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> > >         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
> > >         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
> > >         at org.apache.hadoop.ipc.Client.call(Client.java:1415)
> > >         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > >         at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
> > >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
> > >         at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
> > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >         at java.lang.reflect.Method.invoke(Method.java:483)
> > >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> > >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > >         at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
> > >         at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
> > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >         at java.lang.reflect.Method.invoke(Method.java:483)
> > >         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
> > >         at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
> > >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
> > >         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
> > >         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
> > >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
> > >         at org.apache.hadoop.hbase.regionserver.StoreFile$WriterBuilder.build(StoreFile.java:629)
> > >         at org.apache.hadoop.hbase.regionserver.HStore.createWriterInTmp(HStore.java:1007)
> > >         at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:66)
> > >         at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:920)
> > >         at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2192)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2299)
> > >
> > >
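(The root cause is the HDFS client losing the namenode: "Call From dx-pipe-regionserver4-online/10.11.51.89 to f04:8020 failed ... Connection refused". Two hedged checks, assuming f04 is the namenode host and nn1 is a placeholder HA service id from hdfs-site.xml:

    # can this node reach the namenode RPC endpoint at all?
    hdfs dfs -ls hdfs://f04:8020/

    # with NameNode HA enabled, confirm which namenode is active
    hdfs haadmin -getServiceState nn1

Without HA the regionserver cannot ride over a dead namenode: the flush fails, the RS aborts, and its WALs are split and replayed during recovery, which is the "Replay of WAL required" abort above.)
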
> > > 2016-01-12 10:36 GMT+08:00 Ted Yu <yuzhihong@gmail.com>:
> > >
> > >> Looks like the picture didn't go through.
> > >>
> > >> Consider using a third-party image hosting site.
> > >>
> > >> Pastebinning the server log would help.
> > >>
> > >> Cheers
> > >>
> > >> On Mon, Jan 11, 2016 at 6:28 PM, Heng Chen <heng.chen.1986@gmail.com> wrote:
> > >>
> > >> > [image: inline image 1]
> > >> >
> > >> >
> > >> > HBase 1.1.1, Hadoop 2.5.0
> > >> >
> > >> >
> > >> > I want to recover these regions. How? Asking for help.
> > >> >
> > >>
> > >
> > >
> >
>
