hbase-user mailing list archives

From Praveen Bysani <praveen.ii...@gmail.com>
Subject Re: Errors when starting Hbase service
Date Tue, 02 Apr 2013 10:19:54 GMT
After deleting the /tmp directory, I also face permission denied errors when
accessing the HBase shell. Is there any other /tmp/.. directory that needs
write permission for the current user?

 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: Permission denied
        at com.kenai.jffi.Foreign$InValidInstanceHolder.getForeign(Foreign.java:90)
        at com.kenai.jffi.Foreign.getInstance(Foreign.java:95)
        at com.kenai.jffi.Library.openLibrary(Library.java:151)
        at com.kenai.jffi.Library.getCachedInstance(Library.java:125)
        at com.kenai.jaffl.provider.jffi.Library.loadNativeLibraries(Library.java:66)
        at com.kenai.jaffl.provider.jffi.Library.getNativeLibraries(Library.java:56)
        at com.kenai.jaffl.provider.jffi.Library.getSymbolAddress(Library.java:35)
        at com.kenai.jaffl.provider.jffi.Library.findSymbolAddress(Library.java:45)
        at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:188)
        at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.loadLibrary(AsmLibraryLoader.java:110)
        at com.kenai.jaffl.provider.jffi.Provider.loadLibrary(Provider.java:31)
        at com.kenai.jaffl.provider.jffi.Provider.loadLibrary(Provider.java:25)
        at com.kenai.jaffl.Library.loadLibrary(Library.java:76)
        at org.jruby.ext.posix.POSIXFactory$LinuxLibCProvider$SingletonHolder.<clinit>(POSIXFactory.java:108)
        at org.jruby.ext.posix.POSIXFactory$LinuxLibCProvider.getLibC(POSIXFactory.java:112)
        at org.jruby.ext.posix.BaseNativePOSIX.<init>(BaseNativePOSIX.java:30)
        at org.jruby.ext.posix.LinuxPOSIX.<init>(LinuxPOSIX.java:17)
        at org.jruby.ext.posix.POSIXFactory.loadLinuxPOSIX(POSIXFactory.java:70)
        at org.jruby.ext.posix.POSIXFactory.loadPOSIX(POSIXFactory.java:31)
        at org.jruby.ext.posix.LazyPOSIX.loadPOSIX(LazyPOSIX.java:29)
        at org.jruby.ext.posix.LazyPOSIX.posix(LazyPOSIX.java:25)
        at org.jruby.ext.posix.LazyPOSIX.isatty(LazyPOSIX.java:155)
        at org.jruby.RubyIO.tty_p(RubyIO.java:1858)
        at org.jruby.RubyIO$i$0$0$tty_p.call(RubyIO$i$0$0$tty_p.gen:65535)
        at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
        at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
        at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:63)
        at org.jruby.ast.IfNode.interpret(IfNode.java:111)
        at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
        at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:75)
        at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:147)
        at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:163)
        at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
        at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
        at org.jruby.ast.VCallNode.interpret(VCallNode.java:86)
        at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
        at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
        at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:75)
        at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
        at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:171)
        at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
        at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
        at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148)
        at org.jruby.RubyClass.newInstance(RubyClass.java:806)
        at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535)
        at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249)
        at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
        at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
        at opt.cloudera.parcels.CDH_minus_4_dot_2_dot_0_minus_1_dot_cdh4_dot_2_dot_0_dot_p0_dot_10.lib.hbase.bin.$_dot_dot_.bin.hirb.__file__(/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:113)
        at opt.cloudera.parcels.CDH_minus_4_dot_2_dot_0_minus_1_dot_cdh4_dot_2_dot_0_dot_p0_dot_10.lib.hbase.bin.$_dot_dot_.bin.hirb.load(/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb)
        at org.jruby.Ruby.runScript(Ruby.java:693)
        at org.jruby.Ruby.runScript(Ruby.java:686)
        at org.jruby.Ruby.runNormally(Ruby.java:593)
        at org.jruby.Ruby.runFromMain(Ruby.java:442)
        at org.jruby.Main.doRunFromMain(Main.java:321)
        at org.jruby.Main.internalRun(Main.java:241)
        at org.jruby.Main.run(Main.java:207)
        at org.jruby.Main.run(Main.java:191)
        at org.jruby.Main.main(Main.java:171)
Caused by: java.lang.UnsatisfiedLinkError: Permission denied
        at com.kenai.jffi.Init.loadFromJar(Init.java:151)
        at com.kenai.jffi.Init.load(Init.java:78)
        at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
        at com.kenai.jffi.Foreign$InstanceHolder.<clinit>(Foreign.java:45)
        at com.kenai.jffi.Foreign.getInstance(Foreign.java:95)
        at com.kenai.jffi.Internals.getErrnoSaveFunction(Internals.java:44)
        at com.kenai.jaffl.provider.jffi.StubCompiler.getErrnoSaveFunction(StubCompiler.java:68)
        at com.kenai.jaffl.provider.jffi.StubCompiler.<clinit>(StubCompiler.java:18)
        at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:146)
        ... 50 more
Foreign.java:90:in `getForeign': java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: Permission denied
        from Foreign.java:95:in `getInstance'
        from Library.java:151:in `openLibrary'
        from Library.java:125:in `getCachedInstance'
        from Library.java:66:in `loadNativeLibraries'
        from Library.java:56:in `getNativeLibraries'
        from Library.java:35:in `getSymbolAddress'
        from Library.java:45:in `findSymbolAddress'
        from DefaultInvokerFactory.java:51:in `createInvoker'
        from Library.java:27:in `getInvoker'
        from NativeInvocationHandler.java:90:in `createInvoker'
        from NativeInvocationHandler.java:74:in `getInvoker'
        from NativeInvocationHandler.java:110:in `invoke'
        from null:-1:in `isatty'
        from BaseNativePOSIX.java:287:in `isatty'
        from LazyPOSIX.java:155:in `isatty'
        from RubyIO.java:1858:in `tty_p'
        from RubyIO$i$0$0$tty_p.gen:65535:in `call'
        from CachingCallSite.java:292:in `cacheAndCall'
        from CachingCallSite.java:135:in `call'
        from CallNoArgNode.java:63:in `interpret'
        from IfNode.java:111:in `interpret'
        from NewlineNode.java:104:in `interpret'
        from ASTInterpreter.java:75:in `INTERPRET_METHOD'
        from InterpretedMethod.java:147:in `call'
        from DefaultMethod.java:163:in `call'
        from CachingCallSite.java:292:in `cacheAndCall'
        from CachingCallSite.java:135:in `call'
        from VCallNode.java:86:in `interpret'
        from NewlineNode.java:104:in `interpret'
        from BlockNode.java:71:in `interpret'
        from ASTInterpreter.java:75:in `INTERPRET_METHOD'
        from InterpretedMethod.java:169:in `call'
        from DefaultMethod.java:171:in `call'
        from CachingCallSite.java:302:in `cacheAndCall'
        from CachingCallSite.java:144:in `callBlock'
        from CachingCallSite.java:148:in `call'
        from RubyClass.java:806:in `newInstance'
        from RubyClass$i$newInstance.gen:65535:in `call'
        from JavaMethod.java:249:in `call'
        from CachingCallSite.java:292:in `cacheAndCall'
        from CachingCallSite.java:135:in `call'
        from /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:113:in `__file__'
        from /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:-1:in `load'
        from Ruby.java:693:in `runScript'
        from Ruby.java:686:in `runScript'
        from Ruby.java:593:in `runNormally'
        from Ruby.java:442:in `runFromMain'
        from Main.java:321:in `doRunFromMain'
        from Main.java:241:in `internalRun'
        from Main.java:207:in `run'
        from Main.java:191:in `run'
        from Main.java:171:in `main'
Caused by:
Init.java:151:in `loadFromJar': java.lang.UnsatisfiedLinkError: Permission denied
        from Init.java:78:in `load'
        from Foreign.java:49:in `getInstanceHolder'
        from Foreign.java:45:in `<clinit>'
        from Foreign.java:95:in `getInstance'
        from Internals.java:44:in `getErrnoSaveFunction'
        from StubCompiler.java:68:in `getErrnoSaveFunction'
        from StubCompiler.java:18:in `<clinit>'
        from AsmLibraryLoader.java:146:in `generateInterfaceImpl'
        from AsmLibraryLoader.java:110:in `loadLibrary'
        from Provider.java:31:in `loadLibrary'
        from Provider.java:25:in `loadLibrary'
        from Library.java:76:in `loadLibrary'
        from POSIXFactory.java:108:in `<clinit>'
        from POSIXFactory.java:112:in `getLibC'
        from BaseNativePOSIX.java:30:in `<init>'
        from LinuxPOSIX.java:17:in `<init>'
        from POSIXFactory.java:70:in `loadLinuxPOSIX'
        from POSIXFactory.java:31:in `loadPOSIX'
        from LazyPOSIX.java:29:in `loadPOSIX'
        from LazyPOSIX.java:25:in `posix'
        from LazyPOSIX.java:155:in `isatty'
        from RubyIO.java:1858:in `tty_p'
        from RubyIO$i$0$0$tty_p.gen:65535:in `call'
        from CachingCallSite.java:292:in `cacheAndCall'
        from CachingCallSite.java:135:in `call'
        from CallNoArgNode.java:63:in `interpret'
        from IfNode.java:111:in `interpret'
        from NewlineNode.java:104:in `interpret'
        from ASTInterpreter.java:75:in `INTERPRET_METHOD'
        from InterpretedMethod.java:147:in `call'
        from DefaultMethod.java:163:in `call'
        from CachingCallSite.java:292:in `cacheAndCall'
        from CachingCallSite.java:135:in `call'
        from VCallNode.java:86:in `interpret'
        from NewlineNode.java:104:in `interpret'
        from BlockNode.java:71:in `interpret'
        from ASTInterpreter.java:75:in `INTERPRET_METHOD'
        from InterpretedMethod.java:169:in `call'
        from DefaultMethod.java:171:in `call'
        from CachingCallSite.java:302:in `cacheAndCall'
        from CachingCallSite.java:144:in `callBlock'
        from CachingCallSite.java:148:in `call'
        from RubyClass.java:806:in `newInstance'
        from RubyClass$i$newInstance.gen:65535:in `call'
        from JavaMethod.java:249:in `call'
        from CachingCallSite.java:292:in `cacheAndCall'
        from CachingCallSite.java:135:in `call'
        from /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:113:in `__file__'
        from /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:-1:in `load'
        from Ruby.java:693:in `runScript'
        from Ruby.java:686:in `runScript'
        from Ruby.java:593:in `runNormally'
        from Ruby.java:442:in `runFromMain'
        from Main.java:321:in `doRunFromMain'
        from Main.java:241:in `internalRun'
        from Main.java:207:in `run'
        from Main.java:191:in `run'
        from Main.java:171:in `main'
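For context, the UnsatisfiedLinkError above is thrown while jffi unpacks its native stub library into the JVM's temp directory, so a /tmp that is missing, unwritable, or mounted noexec produces exactly this "Permission denied". A minimal recovery sketch, assuming a stock Linux layout (the hbase-tmp path is only an illustrative choice, not anything HBase requires):

```shell
# If /tmp itself was removed, it has to come back with the sticky bit so
# every user can create temp files again (run as root):
#   mkdir /tmp && chmod 1777 /tmp
# A per-user workaround is to point the JVM that runs the HBase shell at a
# private, writable, exec-enabled temp directory instead:
mkdir -p "$HOME/hbase-tmp"
chmod 700 "$HOME/hbase-tmp"
export HBASE_OPTS="-Djava.io.tmpdir=$HOME/hbase-tmp $HBASE_OPTS"
# then launch the shell as usual: hbase shell
```

This only relocates the JVM's temp files; other services (Hadoop, Pig) that also lost their /tmp subdirectories still need /tmp restored with mode 1777.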



On 2 April 2013 17:55, Praveen Bysani <praveen.iiith@gmail.com> wrote:

> Hi,
>
> I tried deleting /tmp, but the errors are still present in the log. After
> deleting /tmp I am getting a permission denied error when running pig
> scripts :/ I have to sudo every time now, even though I changed the
> permissions of /tmp/hadoop-username. Any ideas?
>
>
> On 2 April 2013 14:52, Vibhav Mundra <mundra@gmail.com> wrote:
>
>> Just stop HBase, and then Hadoop.
>> Then delete the contents of the /tmp folder on all the nodes
>> (master/slaves/regionservers).
>>
>> Try restarting and you should be good to go.
>>
>> -Vibhav
>>
>>
>> On Tue, Apr 2, 2013 at 11:03 AM, Praveen Bysani <praveen.iiith@gmail.com>
>> wrote:
>>
>> > Hi,
>> >
>> > I have set up HBase using Cloudera; the version shows 'HBase
>> > 0.94.2-cdh4.2.0'. In this case the master and regionserver are the same
>> > machine, but during other runs the logs show the hostname of another
>> > regionserver with similar errors. Just a note that the master has a
>> > different system time from the regionservers; is that an issue?
>> >
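On the clock question above: the master rejects region servers whose clocks drift past hbase.master.maxclockskew (30000 ms by default in this generation of HBase), so a persistent time difference is worth fixing, typically by running ntpd on every node. A rough sketch for eyeballing the drift; the hostname is a placeholder, and the remote ssh call is commented out so the snippet stands alone:

```shell
# Compare epoch seconds between the master and a region server; drift
# approaching 30s will get the region server rejected at check-in.
master_now=$(date +%s)                      # run on the master
# rs_now=$(ssh rs1.example.com date +%s)    # run against each region server
rs_now=$master_now                          # stand-in value for this sketch
skew=$(( master_now - rs_now ))
[ "$skew" -lt 0 ] && skew=$(( -skew ))
echo "clock skew: ${skew}s"
```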
>> > I didn't find any errors while starting the service, but there is an
>> > error while shutting down the service. The following is the log from the
>> > region server on the same machine during shutdown:
>> >
>> > 2013-04-02 03:36:54,390 INFO
>> > org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager:
>> > Stopping RegionServerSnapshotManager gracefully.
>> > 2013-04-02 03:36:54,396 INFO
>> > org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 2
>> > regions to close
>> > 2013-04-02 03:36:54,397 INFO
>> > org.apache.hadoop.hbase.regionserver.Store: Closed info
>> > 2013-04-02 03:36:54,398 INFO
>> > org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> > -ROOT-,,0.70236052
>> > 2013-04-02 03:36:54,426 INFO
>> > org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
>> > filter type for
>> > hdfs://server.epicoders.com:8020/hbase/.META./1028785192/.tmp/1104b09fbaeb41829c2493875d7475c1
>> > : CompoundBloomFilterWriter
>> > 2013-04-02 03:36:54,558 INFO
>> > org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and
>> > NO DeleteFamily was added to HFile
>> > (hdfs://server.epicoders.com:8020/hbase/.META./1028785192/.tmp/1104b09fbaeb41829c2493875d7475c1)
>> > 2013-04-02 03:36:54,558 INFO
>> > org.apache.hadoop.hbase.regionserver.Store: Flushed ,
>> > sequenceid=90407, memsize=2.3k, into tmp file
>> > hdfs://server.epicoders.com:8020/hbase/.META./1028785192/.tmp/1104b09fbaeb41829c2493875d7475c1
>> > 2013-04-02 03:36:54,612 INFO
>> > org.apache.hadoop.hbase.regionserver.Store: Added
>> > hdfs://server.epicoders.com:8020/hbase/.META./1028785192/info/1104b09fbaeb41829c2493875d7475c1,
>> > entries=8, sequenceid=90407, filesize=1.6k
>> > 2013-04-02 03:36:54,669 INFO
>> > org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush
>> > of ~2.3k/2320, currentsize=0/0 for region .META.,,1.1028785192 in
>> > 273ms, sequenceid=90407, compaction requested=false
>> > 2013-04-02 03:36:54,676 INFO
>> > org.apache.hadoop.hbase.regionserver.Store: Closed info
>> > 2013-04-02 03:36:54,676 INFO
>> > org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> > .META.,,1.1028785192
>> > 2013-04-02 03:36:54,799 INFO
>> > org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
>> > server.epicoders.com,60020,1364559783898; all regions closed.
>> > 2013-04-02 03:36:54,800 INFO
>> > org.apache.hadoop.hbase.regionserver.wal.HLog:
>> > regionserver60020.logSyncer exiting
>> > 2013-04-02 03:36:54,861 INFO
>> > org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closing
>> > leases
>> > 2013-04-02 03:36:54,862 INFO
>> > org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closed
>> > leases
>> > 2013-04-02 03:36:54,864 WARN
>> > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly
>> > transient ZooKeeper exception:
>> > org.apache.zookeeper.KeeperException$SessionExpiredException:
>> > KeeperErrorCode = Session expired for
>> > /hbase/rs/server.epicoders.com,60020,1364559783898
>> > 2013-04-02 03:36:54,864 INFO
>> > org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before
>> > retry #1...
>> > 2013-04-02 03:36:56,868 WARN
>> > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly
>> > transient ZooKeeper exception:
>> > org.apache.zookeeper.KeeperException$SessionExpiredException:
>> > KeeperErrorCode = Session expired for
>> > /hbase/rs/server.epicoders.com,60020,1364559783898
>> > 2013-04-02 03:36:56,871 INFO
>> > org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before
>> > retry #2...
>> > 2013-04-02 03:36:59,051 INFO
>> > org.apache.hadoop.hbase.regionserver.Leases:
>> > regionserver60020.leaseChecker closing leases
>> > 2013-04-02 03:36:59,054 INFO
>> > org.apache.hadoop.hbase.regionserver.Leases:
>> > regionserver60020.leaseChecker closed leases
>> > 2013-04-02 03:37:00,872 WARN
>> > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly
>> > transient ZooKeeper exception:
>> > org.apache.zookeeper.KeeperException$SessionExpiredException:
>> > KeeperErrorCode = Session expired for
>> > /hbase/rs/server.epicoders.com,60020,1364559783898
>> > 2013-04-02 03:37:00,873 INFO
>> > org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before
>> > retry #3...
>> > 2013-04-02 03:37:08,875 WARN
>> > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly
>> > transient ZooKeeper exception:
>> > org.apache.zookeeper.KeeperException$SessionExpiredException:
>> > KeeperErrorCode = Session expired for
>> > /hbase/rs/server.epicoders.com,60020,1364559783898
>> > 2013-04-02 03:37:08,876 ERROR
>> > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper
>> > delete failed after 3 retries
>> > 2013-04-02 03:37:08,876 WARN
>> > org.apache.hadoop.hbase.regionserver.HRegionServer: Failed deleting my
>> > ephemeral node
>> > org.apache.zookeeper.KeeperException$SessionExpiredException:
>> > KeeperErrorCode = Session expired for
>> > /hbase/rs/server.epicoders.com,60020,1364559783898
>> >         at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>> >         at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>> >         at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
>> >         at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:137)
>> >         at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1215)
>> >         at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1204)
>> >         at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1068)
>> >         at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:846)
>> >         at java.lang.Thread.run(Thread.java:662)
>> > 2013-04-02 03:37:08,876 INFO
>> > org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
>> > server.epicoders.com,60020,1364559783898; zookeeper connection closed.
>> > 2013-04-02 03:37:08,877 INFO
>> > org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020
>> > exiting
>> > 2013-04-02 03:37:08,879 INFO
>> > org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
>> > starting; hbase.shutdown.hook=true;
>> > fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@7a2431b9
>> > 2013-04-02 03:37:08,879 INFO
>> > org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown
>> > hook
>> > 2013-04-02 03:37:08,879 INFO
>> > org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs
>> > shutdown hook thread.
>> > 2013-04-02 03:37:08,879 INFO
>> > org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
>> > finished.
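The SessionExpiredException retries in the log above are the region server trying to delete its own ephemeral znode after its ZooKeeper session has already expired; ZooKeeper removes ephemeral nodes itself once the owning session dies, which is why the shutdown still completes. One hedged way to confirm nothing stale is left behind, guarded so the snippet degrades gracefully where the hbase client is not installed (the quorum address is taken from the log):

```shell
# List the ephemeral region-server znodes; a server that has truly gone
# away should no longer appear under /hbase/rs.
if command -v hbase >/dev/null 2>&1; then
  hbase zkcli -server server.epicoders.com:2181 ls /hbase/rs
else
  echo "hbase client not on PATH; run this on a cluster node"
fi
```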
>> >
>> >
>> >
>> >
>> > On 2 April 2013 12:31, Ted Yu <yuzhihong@gmail.com> wrote:
>> >
>> > > Have you checked the region server log on
>> > > server.epicoders.com,60020,1364559783898 around the time
>> > > NotServingRegionException was seen in the master log?
>> > >
>> > > What version of HBase are you using?
>> > >
>> > > Thanks
>> > >
>> > >
>> > > On Mon, Apr 1, 2013 at 9:20 PM, Praveen Bysani <praveen.iiith@gmail.com>
>> > > wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > When I try to restart the HBase service, I see the following errors
>> > > > in my HBase Master log:
>> > > >
>> > > >
>> > > > 2013-04-02 03:37:29,713 INFO
>> > > > org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
>> > > > 2013-04-02 03:37:29,797 INFO
>> > > > org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for
>> > > > /hbase/backup-masters/server.epicoders.com,60000,1364873849167 from
>> > > > backup master directory
>> > > > 2013-04-02 03:37:29,816 WARN
>> > > > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
>> > > > /hbase/backup-masters/server.epicoders.com,60000,1364873849167 already
>> > > > deleted, and this is not a retry
>> > > > 2013-04-02 03:37:29,816 INFO
>> > > > org.apache.hadoop.hbase.master.ActiveMasterManager:
>> > > > Master=server.epicoders.com,60000,1364873849167
>> > > > 2013-04-02 03:37:31,830 WARN org.apache.hadoop.conf.Configuration:
>> > > > fs.default.name is deprecated. Instead, use fs.defaultFS
>> > > > 2013-04-02 03:37:31,848 INFO
>> > > > org.apache.hadoop.hbase.master.SplitLogManager: found 0 orphan tasks
>> > > > and 0 rescan nodes
>> > > > 2013-04-02 03:37:32,349 WARN org.apache.hadoop.conf.Configuration:
>> > > > hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>> > > > 2013-04-02 03:37:32,774 INFO org.apache.hadoop.hbase.master.HMaster:
>> > > > Server active/primary master;
>> > > > server.epicoders.com,60000,1364873849167, sessionid=0x13daf9ed2b90086,
>> > > > cluster-up flag was=false
>> > > > 2013-04-02 03:37:32,817 INFO
>> > > > org.apache.hadoop.hbase.master.snapshot.SnapshotManager: Snapshot
>> > > > feature is not enabled, missing log and hfile cleaners.
>> > > > 2013-04-02 03:37:32,846 INFO
>> > > > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
>> > > > /hbase/online-snapshot/acquired already exists and this is not a retry
>> > > > 2013-04-02 03:37:32,856 INFO
>> > > > org.apache.hadoop.hbase.procedure.ZKProcedureUtil: Clearing all
>> > > > procedure znodes: /hbase/online-snapshot/acquired
>> > > > /hbase/online-snapshot/reached /hbase/online-snapshot/abort
>> > > > 2013-04-02 03:37:33,095 INFO org.mortbay.log: Logging to
>> > > > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> > > > org.mortbay.log.Slf4jLog
>> > > > 2013-04-02 03:37:33,175 INFO org.apache.hadoop.http.HttpServer: Added
>> > > > global filter 'safety'
>> > > > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>> > > > 2013-04-02 03:37:33,178 INFO org.apache.hadoop.http.HttpServer: Added
>> > > > filter static_user_filter
>> > > > (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
>> > > > to context master
>> > > > 2013-04-02 03:37:33,178 INFO org.apache.hadoop.http.HttpServer: Added
>> > > > filter static_user_filter
>> > > > (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
>> > > > to context static
>> > > > 2013-04-02 03:37:33,200 INFO org.apache.hadoop.http.HttpServer: Jetty
>> > > > bound to port 60010
>> > > > 2013-04-02 03:37:33,200 INFO org.mortbay.log: jetty-6.1.26.cloudera.2
>> > > > 2013-04-02 03:37:33,880 INFO org.mortbay.log: Started
>> > > > SelectChannelConnector@0.0.0.0:60010
>> > > > 2013-04-02 03:37:33,881 INFO
>> > > > org.apache.hadoop.hbase.master.ServerManager: Waiting for region
>> > > > servers count to settle; currently checked in 0, slept for 0 ms,
>> > > > expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms,
>> > > > interval of 1500 ms.
>> > > > 2013-04-02 03:37:34,079 INFO
>> > > > org.apache.hadoop.hbase.master.ServerManager: Registering
>> > > > server=test3.jayeson.com.sg,60020,1364873839936
>> > > > 2013-04-02 03:37:34,084 INFO
>> > > > org.apache.hadoop.hbase.master.ServerManager: Registering
>> > > > server=test2.jayeson.com.sg,60020,1364873841105
>> > > > 2013-04-02 03:37:34,085 INFO
>> > > > org.apache.hadoop.hbase.master.ServerManager: Registering
>> > > > server=server.epicoders.com,60020,1364873849637
>> > > > 2013-04-02 03:37:34,091 INFO
>> > > > org.apache.hadoop.hbase.master.ServerManager: Waiting for region
>> > > > servers count to settle; currently checked in 3, slept for 210 ms,
>> > > > expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms,
>> > > > interval of 1500 ms.
>> > > > 2013-04-02 03:37:34,103 WARN org.apache.hadoop.conf.Configuration:
>> > > > fs.default.name is deprecated. Instead, use fs.defaultFS
>> > > > 2013-04-02 03:37:35,634 INFO
>> > > > org.apache.hadoop.hbase.master.ServerManager: Finished waiting for
>> > > > region servers count to settle; checked in 3, slept for 1752 ms,
>> > > > expecting minimum of 1, maximum of 2147483647, master is running.
>> > > > 2013-04-02 03:37:35,639 INFO
>> > > > org.apache.hadoop.hbase.master.MasterFileSystem: Log folder
>> > > > hdfs://server.epicoders.com:8020/hbase/.logs/server.epicoders.com,60020,1364873849637
>> > > > belongs to an existing region server
>> > > > 2013-04-02 03:37:35,639 INFO
>> > > > org.apache.hadoop.hbase.master.MasterFileSystem: Log folder
>> > > > hdfs://server.epicoders.com:8020/hbase/.logs/test2.jayeson.com.sg,60020,1364873841105
>> > > > belongs to an existing region server
>> > > > 2013-04-02 03:37:35,640 INFO
>> > > > org.apache.hadoop.hbase.master.MasterFileSystem: Log folder
>> > > > hdfs://server.epicoders.com:8020/hbase/.logs/test3.jayeson.com.sg,60020,1364873839936
>> > > > belongs to an existing region server
>> > > > 2013-04-02 03:37:35,640 INFO
>> > > > org.apache.hadoop.hbase.master.MasterFileSystem: No logs to split
>> > > > 2013-04-02 03:37:35,669 INFO org.apache.zookeeper.ZooKeeper:
>> > > > Initiating client connection, connectString=server.epicoders.com:2181
>> > > > sessionTimeout=60000 watcher=hconnection
>> > > > 2013-04-02 03:37:35,671 INFO org.apache.zookeeper.ClientCnxn: Opening
>> > > > socket connection to server server.epicoders.com/<ip-address>:2181.
>> > > > Will not attempt to authenticate using SASL (Unable to locate a login
>> > > > configuration)
>> > > > 2013-04-02 03:37:35,672 INFO org.apache.zookeeper.ClientCnxn: Socket
>> > > > connection established to server.epicoders.com/<ip-address>:2181,
>> > > > initiating session
>> > > > 2013-04-02 03:37:35,673 INFO
>> > > > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>> > > > of this process is 1043@server.epicoders.com
>> > > > 2013-04-02 03:37:35,676 INFO org.apache.zookeeper.ClientCnxn: Session
>> > > > establishment complete on server
>> > > > server.epicoders.com/<ip-address>:2181, sessionid = 0x13daf9ed2b90088,
>> > > > negotiated timeout = 60000
>> > > > 2013-04-02 03:37:35,730 INFO
>> > > > org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of
>> > > > -ROOT-,,0 at address=server.epicoders.com,60020,1364559783898;
>> > > > org.apache.hadoop.hbase.NotServingRegionException:
>> > > > org.apache.hadoop.hbase.NotServingRegionException: Region is not
>> > > > online: -ROOT-,,0
>> > > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3211)
>> > > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionInfo(HRegionServer.java:1996)
>> > > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> > > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> > > >         at java.lang.reflect.Method.invoke(Method.java:597)
>> > > >         at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>> > > >         at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
>> > > >
>> > > > 2013-04-02 03:37:35,733 INFO
>> > > > org.apache.hadoop.hbase.catalog.RootLocationEditor: Unsetting ROOT
>> > > > region location in ZooKeeper
>> > > > 2013-04-02 03:37:36,654 INFO
>> > > > org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: Handling
>> > > > OPENED event for -ROOT-,,0.70236052 from
>> > > > test3.jayeson.com.sg,60020,1364873839936; deleting unassigned node
>> > > > 2013-04-02 03:37:36,663 INFO
>> > > > org.apache.hadoop.hbase.master.AssignmentManager: The master has
>> > > > opened the region -ROOT-,,0.70236052 that was online on
>> > > > test3.jayeson.com.sg,60020,1364873839936
>> > > > 2013-04-02 03:37:36,683 INFO org.apache.hadoop.hbase.master.HMaster:
>> > > > -ROOT- assigned=1, rit=false,
>> > > > location=test3.jayeson.com.sg,60020,1364873839936
>> > > > 2013-04-02 03:37:36,797 INFO
>> > > > org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of
>> > > > .META.,,1 at address=server.epicoders.com,60020,1364559783898;
>> > > > org.apache.hadoop.hbase.NotServingRegionException:
>> > > > org.apache.hadoop.hbase.NotServingRegionException: Region is not
>> > > > online: .META.,,1
>> > > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3211)
>> > > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionInfo(HRegionServer.java:1996)
>> > > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> > > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> > > >         at java.lang.reflect.Method.invoke(Method.java:597)
>> > > >         at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>> > > >         at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
>> > > >
>> > > >
>> > > > After continuing with the same error for a while, the log ends with:
>> > > >
>> > > > 2013-04-02 03:37:39,281 INFO
>> > > > org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: Handling
>> > > > OPENED event for .META.,,1.1028785192 from
>> > > > test2.jayeson.com.sg,60020,1364873841105; deleting unassigned node
>> > > > 2013-04-02 03:37:39,328 INFO
>> > > > org.apache.hadoop.hbase.master.AssignmentManager: The master has
>> > > > opened the region .META.,,1.1028785192 that was online on
>> > > > test2.jayeson.com.sg,60020,1364873841105
>> > > > 2013-04-02 03:37:39,335 INFO org.apache.hadoop.hbase.master.HMaster:
>> > > > .META. assigned=2, rit=false,
>> > > > location=test2.jayeson.com.sg,60020,1364873841105
>> > > > 2013-04-02 03:37:39,359 INFO
>> > > > org.apache.hadoop.hbase.catalog.MetaMigrationRemovingHTD: Meta
>> > > > version=0; migrated=true
>> > > > 2013-04-02 03:37:39,359 INFO
>> > > > org.apache.hadoop.hbase.catalog.MetaMigrationRemovingHTD: ROOT/Meta
>> > > > already up-to date with new HRI.
>> > > > 2013-04-02 03:37:39,472 INFO
>> > > > org.apache.hadoop.hbase.master.AssignmentManager: Clean cluster
>> > > > startup. Assigning userregions
>> > > > 2013-04-02 03:37:39,535 INFO
>> > > > org.apache.hadoop.hbase.master.LoadBalancer: Reassigned 2 regions. 2
>> > > > retained the pre-restart assignment.
>> > > > 2013-04-02 03:37:39,535 INFO
>> > > > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning 2
>> > > > region(s) across 3 server(s), retainAssignment=true
>> > > > 2013-04-02 03:37:39,541 INFO
>> > > > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning done
>> > > > 2013-04-02 03:37:39,562 INFO org.apache.hadoop.hbase.master.HMaster:
>> > > > Registered HMaster MXBean
>> > > > 2013-04-02 03:37:39,562 INFO org.apache.hadoop.hbase.master.HMaster:
>> > > > Master has completed initialization
>> > > > 2013-04-02 03:37:39,577 INFO
>> > > > org.apache.hadoop.hbase.master.AssignmentManager:
>> > > > test2.jayeson.com.sg,60020,1364873841105 unassigned znodes=1 of
>> > > > total=1
>> > > > 2013-04-02 03:37:39,578 INFO
>> > > > org.apache.hadoop.hbase.master.AssignmentManager:
>> > > > test3.jayeson.com.sg,60020,1364873839936 unassigned znodes=1 of
>> > > > total=1
>> > > >
>> > > > Finally I could see the service is running fine, and I can access and
>> > > > write data into HBase from the shell. However, I want to know if these
>> > > > errors mean anything or affect any behavior of HBase?
>> > > >
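As a hedged note on the question above: the NotServingRegionException lines during startup are the master probing -ROOT- and .META. at their pre-restart locations before reassigning them, which is routine after a restart. For reassurance that the cluster settled, the stock consistency checker of this HBase vintage can be run (wrapped in a guard so the snippet stands alone off-cluster):

```shell
# hbck compares the .META. table against what the region servers actually
# serve and reports any inconsistencies on an unhealthy cluster.
if command -v hbase >/dev/null 2>&1; then
  hbase hbck
else
  echo "hbase client not on PATH; run this on a cluster node"
fi
```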
>> > > > --
>> > > > Regards,
>> > > > Praveen Bysani
>> > > > http://www.praveenbysani.com
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > Regards,
>> > Praveen Bysani
>> > http://www.praveenbysani.com
>> >
>>
>
>
>
> --
> Regards,
> Praveen Bysani
> http://www.praveenbysani.com
>



-- 
Regards,
Praveen Bysani
http://www.praveenbysani.com
