From: "huaxiang" <huaxiang@asiainfo-linkage.com>
Delivered-To: mailing list user@hbase.apache.org
Subject: eclipse plugin for hadoop2.0.0
Date: Tue, 2 Apr 2013 20:00:02 +0800

Hi,

I have downloaded 'hadoop-0.20.3-dev-eclipse-plugin.jar', but it does not seem to work. My Hadoop is 2.0.0; where can I find the Eclipse plugin for Hadoop 2.0.0? Many thanks!

beatls

-----Original Message-----
From: Praveen Bysani [mailto:praveen.iiith@gmail.com]
Sent: 2 April 2013 18:20
To: user@hbase.apache.org
Subject: Re: Errors when starting Hbase service

After deleting the /tmp directory, I also get permission denied errors while accessing the hbase shell. Is there any other /tmp/.. directory that needs to be given write permissions for the current user?

WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: Permission denied
    at com.kenai.jffi.Foreign$InValidInstanceHolder.getForeign(Foreign.java:90)
    at com.kenai.jffi.Foreign.getInstance(Foreign.java:95)
    at com.kenai.jffi.Library.openLibrary(Library.java:151)
    at com.kenai.jffi.Library.getCachedInstance(Library.java:125)
    at com.kenai.jaffl.provider.jffi.Library.loadNativeLibraries(Library.java:66)
    at com.kenai.jaffl.provider.jffi.Library.getNativeLibraries(Library.java:56)
    at com.kenai.jaffl.provider.jffi.Library.getSymbolAddress(Library.java:35)
    at com.kenai.jaffl.provider.jffi.Library.findSymbolAddress(Library.java:45)
    at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:188)
    at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.loadLibrary(AsmLibraryLoader.java:110)
    at com.kenai.jaffl.provider.jffi.Provider.loadLibrary(Provider.java:31)
    at com.kenai.jaffl.provider.jffi.Provider.loadLibrary(Provider.java:25)
    at com.kenai.jaffl.Library.loadLibrary(Library.java:76)
    at org.jruby.ext.posix.POSIXFactory$LinuxLibCProvider$SingletonHolder.<clinit>(POSIXFactory.java:108)
    at org.jruby.ext.posix.POSIXFactory$LinuxLibCProvider.getLibC(POSIXFactory.java:112)
    at org.jruby.ext.posix.BaseNativePOSIX.<init>(BaseNativePOSIX.java:30)
    at org.jruby.ext.posix.LinuxPOSIX.<init>(LinuxPOSIX.java:17)
    at org.jruby.ext.posix.POSIXFactory.loadLinuxPOSIX(POSIXFactory.java:70)
    at org.jruby.ext.posix.POSIXFactory.loadPOSIX(POSIXFactory.java:31)
    at org.jruby.ext.posix.LazyPOSIX.loadPOSIX(LazyPOSIX.java:29)
    at org.jruby.ext.posix.LazyPOSIX.posix(LazyPOSIX.java:25)
    at org.jruby.ext.posix.LazyPOSIX.isatty(LazyPOSIX.java:155)
    at org.jruby.RubyIO.tty_p(RubyIO.java:1858)
    at org.jruby.RubyIO$i$0$0$tty_p.call(RubyIO$i$0$0$tty_p.gen:65535)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
    at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:63)
    at org.jruby.ast.IfNode.interpret(IfNode.java:111)
    at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
    at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:75)
    at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:147)
    at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:163)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
    at org.jruby.ast.VCallNode.interpret(VCallNode.java:86)
    at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
    at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
    at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:75)
    at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
    at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:171)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148)
    at org.jruby.RubyClass.newInstance(RubyClass.java:806)
    at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535)
    at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
    at opt.cloudera.parcels.CDH_minus_4_dot_2_dot_0_minus_1_dot_cdh4_dot_2_dot_0_dot_p0_dot_10.lib.hbase.bin.$_dot_dot_.bin.hirb.__file__(/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:113)
    at opt.cloudera.parcels.CDH_minus_4_dot_2_dot_0_minus_1_dot_cdh4_dot_2_dot_0_dot_p0_dot_10.lib.hbase.bin.$_dot_dot_.bin.hirb.load(/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb)
    at org.jruby.Ruby.runScript(Ruby.java:693)
    at org.jruby.Ruby.runScript(Ruby.java:686)
    at org.jruby.Ruby.runNormally(Ruby.java:593)
    at org.jruby.Ruby.runFromMain(Ruby.java:442)
    at org.jruby.Main.doRunFromMain(Main.java:321)
    at org.jruby.Main.internalRun(Main.java:241)
    at org.jruby.Main.run(Main.java:207)
    at org.jruby.Main.run(Main.java:191)
    at org.jruby.Main.main(Main.java:171)
Caused by: java.lang.UnsatisfiedLinkError: Permission denied
    at com.kenai.jffi.Init.loadFromJar(Init.java:151)
    at com.kenai.jffi.Init.load(Init.java:78)
    at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
    at com.kenai.jffi.Foreign$InstanceHolder.<clinit>(Foreign.java:45)
    at com.kenai.jffi.Foreign.getInstance(Foreign.java:95)
    at com.kenai.jffi.Internals.getErrnoSaveFunction(Internals.java:44)
    at com.kenai.jaffl.provider.jffi.StubCompiler.getErrnoSaveFunction(StubCompiler.java:68)
    at com.kenai.jaffl.provider.jffi.StubCompiler.<clinit>(StubCompiler.java:18)
    at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:146)
    ... 50 more
Foreign.java:90:in `getForeign': java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: Permission denied
    from Foreign.java:95:in `getInstance'
    from Library.java:151:in `openLibrary'
    from Library.java:125:in `getCachedInstance'
    from Library.java:66:in `loadNativeLibraries'
    from Library.java:56:in `getNativeLibraries'
    from Library.java:35:in `getSymbolAddress'
    from Library.java:45:in `findSymbolAddress'
    from DefaultInvokerFactory.java:51:in `createInvoker'
    from Library.java:27:in `getInvoker'
    from NativeInvocationHandler.java:90:in `createInvoker'
    from NativeInvocationHandler.java:74:in `getInvoker'
    from NativeInvocationHandler.java:110:in `invoke'
    from null:-1:in `isatty'
    from BaseNativePOSIX.java:287:in `isatty'
    from LazyPOSIX.java:155:in `isatty'
    from RubyIO.java:1858:in `tty_p'
    from RubyIO$i$0$0$tty_p.gen:65535:in `call'
    from CachingCallSite.java:292:in `cacheAndCall'
    from CachingCallSite.java:135:in `call'
    from CallNoArgNode.java:63:in `interpret'
    from IfNode.java:111:in `interpret'
    from NewlineNode.java:104:in `interpret'
    from ASTInterpreter.java:75:in `INTERPRET_METHOD'
    from InterpretedMethod.java:147:in `call'
    from DefaultMethod.java:163:in `call'
    from CachingCallSite.java:292:in `cacheAndCall'
    from CachingCallSite.java:135:in `call'
    from VCallNode.java:86:in `interpret'
    from NewlineNode.java:104:in `interpret'
    from BlockNode.java:71:in `interpret'
    from ASTInterpreter.java:75:in `INTERPRET_METHOD'
    from InterpretedMethod.java:169:in `call'
    from DefaultMethod.java:171:in `call'
    from CachingCallSite.java:302:in `cacheAndCall'
    from CachingCallSite.java:144:in `callBlock'
    from CachingCallSite.java:148:in `call'
    from RubyClass.java:806:in `newInstance'
    from RubyClass$i$newInstance.gen:65535:in `call'
    from JavaMethod.java:249:in `call'
    from CachingCallSite.java:292:in `cacheAndCall'
    from CachingCallSite.java:135:in `call'
    from /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:113:in `__file__'
    from /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:-1:in `load'
    from Ruby.java:693:in `runScript'
    from Ruby.java:686:in `runScript'
    from Ruby.java:593:in `runNormally'
    from Ruby.java:442:in `runFromMain'
    from Main.java:321:in `doRunFromMain'
    from Main.java:241:in `internalRun'
    from Main.java:207:in `run'
    from Main.java:191:in `run'
    from Main.java:171:in `main'
Caused by: Init.java:151:in `loadFromJar': java.lang.UnsatisfiedLinkError: Permission denied
    from Init.java:78:in `load'
    from Foreign.java:49:in `getInstanceHolder'
    from Foreign.java:45:in `<clinit>'
    from Foreign.java:95:in `getInstance'
    from Internals.java:44:in `getErrnoSaveFunction'
    from StubCompiler.java:68:in `getErrnoSaveFunction'
    from StubCompiler.java:18:in `<clinit>'
    from AsmLibraryLoader.java:146:in `generateInterfaceImpl'
    from AsmLibraryLoader.java:110:in `loadLibrary'
    from Provider.java:31:in `loadLibrary'
    from Provider.java:25:in `loadLibrary'
    from Library.java:76:in `loadLibrary'
    from POSIXFactory.java:108:in `<clinit>'
    from POSIXFactory.java:112:in `getLibC'
    from BaseNativePOSIX.java:30:in `<init>'
    from LinuxPOSIX.java:17:in `<init>'
    from POSIXFactory.java:70:in `loadLinuxPOSIX'
    from POSIXFactory.java:31:in `loadPOSIX'
    from LazyPOSIX.java:29:in `loadPOSIX'
    from LazyPOSIX.java:25:in `posix'
    from LazyPOSIX.java:155:in `isatty'
    from RubyIO.java:1858:in `tty_p'
    from RubyIO$i$0$0$tty_p.gen:65535:in `call'
    from CachingCallSite.java:292:in `cacheAndCall'
    from CachingCallSite.java:135:in `call'
    from CallNoArgNode.java:63:in `interpret'
    from IfNode.java:111:in `interpret'
    from NewlineNode.java:104:in `interpret'
    from ASTInterpreter.java:75:in `INTERPRET_METHOD'
    from InterpretedMethod.java:147:in `call'
    from DefaultMethod.java:163:in `call'
    from CachingCallSite.java:292:in `cacheAndCall'
    from CachingCallSite.java:135:in `call'
    from VCallNode.java:86:in `interpret'
    from NewlineNode.java:104:in `interpret'
    from BlockNode.java:71:in `interpret'
    from ASTInterpreter.java:75:in `INTERPRET_METHOD'
    from InterpretedMethod.java:169:in `call'
    from DefaultMethod.java:171:in `call'
    from CachingCallSite.java:302:in `cacheAndCall'
    from CachingCallSite.java:144:in `callBlock'
    from CachingCallSite.java:148:in `call'
    from RubyClass.java:806:in `newInstance'
    from RubyClass$i$newInstance.gen:65535:in `call'
    from JavaMethod.java:249:in `call'
    from CachingCallSite.java:292:in `cacheAndCall'
    from CachingCallSite.java:135:in `call'
    from /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:113:in `__file__'
    from /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../bin/hirb.rb:-1:in `load'
    from Ruby.java:693:in `runScript'
    from Ruby.java:686:in `runScript'
    from Ruby.java:593:in `runNormally'
    from Ruby.java:442:in `runFromMain'
    from Main.java:321:in `doRunFromMain'
    from Main.java:241:in `internalRun'
    from Main.java:207:in `run'
    from Main.java:191:in `run'
    from Main.java:171:in `main'

On 2 April 2013 17:55, Praveen Bysani wrote:

> Hi,
>
> I tried deleting /tmp, but the errors are still present in the log.
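A note on the `UnsatisfiedLinkError: Permission denied` in the trace above: jffi has to unpack its native stub library into the JVM's temporary directory (`java.io.tmpdir`, `/tmp` by default) and load it from there, so a temp directory the current user cannot write to or search produces exactly this failure. A minimal sketch of that precondition check, with a hypothetical helper name (this is not HBase or jffi code):

```python
import os

def tmpdir_usable(path="/tmp"):
    """Return True if `path` exists and the current user can both write
    to it (to extract the native stub) and search it (to open the
    extracted file). This checks permission bits only; a noexec mount
    can still make the subsequent library load fail."""
    return os.path.isdir(path) and os.access(path, os.W_OK | os.X_OK)
```

If the check fails for `/tmp`, pointing `java.io.tmpdir` at a directory the user owns (for example via `HBASE_OPTS="-Djava.io.tmpdir=..."`) is a common workaround.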
> After deleting /tmp I now get a permission denied error when running pig
> scripts :/ I have to sudo every time, even though I changed the
> permissions of /tmp/hadoop-username. Any ideas?
>
>
> On 2 April 2013 14:52, Vibhav Mundra wrote:
>
>> Just stop HBase, and then Hadoop.
>> Then delete the contents of the /tmp folder on all the nodes
>> (master/slaves/regionservers).
>>
>> Try restarting and you should be good to go.
>>
>> -Vibhav
>>
>>
>> On Tue, Apr 2, 2013 at 11:03 AM, Praveen Bysani wrote:
>>
>> > Hi,
>> >
>> > I have set up HBase using Cloudera; the version it shows is 'HBase
>> > 0.94.2-cdh4.2.0'. In this case both the server and the regionserver are
>> > the same machine, but during other instances the logs show the hostname
>> > of another regionserver with similar errors. Just a note that the master
>> > has a different system time from the regionservers; is that an issue?
>> >
>> > I didn't find any errors while starting the service, but there is an
>> > error while shutting down the service. The following is the log from the
>> > region server on the same machine during shutdown:
>> >
>> > 2013-04-02 03:36:54,390 INFO org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager gracefully.
>> > 2013-04-02 03:36:54,396 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 2 regions to close
>> > 2013-04-02 03:36:54,397 INFO org.apache.hadoop.hbase.regionserver.Store: Closed info
>> > 2013-04-02 03:36:54,398 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed -ROOT-,,0.70236052
>> > 2013-04-02 03:36:54,426 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://server.epicoders.com:8020/hbase/.META./1028785192/.tmp/1104b09fbaeb41829c2493875d7475c1: CompoundBloomFilterWriter
>> > 2013-04-02 03:36:54,558 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://server.epicoders.com:8020/hbase/.META./1028785192/.tmp/1104b09fbaeb41829c2493875d7475c1)
>> > 2013-04-02 03:36:54,558 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=90407, memsize=2.3k, into tmp file hdfs://server.epicoders.com:8020/hbase/.META./1028785192/.tmp/1104b09fbaeb41829c2493875d7475c1
>> > 2013-04-02 03:36:54,612 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://server.epicoders.com:8020/hbase/.META./1028785192/info/1104b09fbaeb41829c2493875d7475c1, entries=8, sequenceid=90407, filesize=1.6k
>> > 2013-04-02 03:36:54,669 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~2.3k/2320, currentsize=0/0 for region .META.,,1.1028785192 in 273ms, sequenceid=90407, compaction requested=false
>> > 2013-04-02 03:36:54,676 INFO org.apache.hadoop.hbase.regionserver.Store: Closed info
>> > 2013-04-02 03:36:54,676 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed .META.,,1.1028785192
>> > 2013-04-02 03:36:54,799 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server server.epicoders.com,60020,1364559783898; all regions closed.
>> > 2013-04-02 03:36:54,800 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: regionserver60020.logSyncer exiting
>> > 2013-04-02 03:36:54,861 INFO org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closing leases
>> > 2013-04-02 03:36:54,862 INFO org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closed leases
>> > 2013-04-02 03:36:54,864 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/rs/server.epicoders.com,60020,1364559783898
>> > 2013-04-02 03:36:54,864 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
>> > 2013-04-02 03:36:56,868 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/rs/server.epicoders.com,60020,1364559783898
>> > 2013-04-02 03:36:56,871 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
>> > 2013-04-02 03:36:59,051 INFO org.apache.hadoop.hbase.regionserver.Leases: regionserver60020.leaseChecker closing leases
>> > 2013-04-02 03:36:59,054 INFO org.apache.hadoop.hbase.regionserver.Leases: regionserver60020.leaseChecker closed leases
>> > 2013-04-02 03:37:00,872 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/rs/server.epicoders.com,60020,1364559783898
>> > 2013-04-02 03:37:00,873 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
>> > 2013-04-02 03:37:08,875 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/rs/server.epicoders.com,60020,1364559783898
>> > 2013-04-02 03:37:08,876 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper delete failed after 3 retries
>> > 2013-04-02 03:37:08,876 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Failed deleting my ephemeral node
>> > org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/rs/server.epicoders.com,60020,1364559783898
>> >     at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>> >     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>> >     at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
>> >     at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:137)
>> >     at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1215)
>> >     at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1204)
>> >     at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1068)
>> >     at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:846)
>> >     at java.lang.Thread.run(Thread.java:662)
>> > 2013-04-02 03:37:08,876 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server server.epicoders.com,60020,1364559783898; zookeeper connection closed.
>> > 2013-04-02 03:37:08,877 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020 exiting
>> > 2013-04-02 03:37:08,879 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@7a2431b9
>> > 2013-04-02 03:37:08,879 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
>> > 2013-04-02 03:37:08,879 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
>> > 2013-04-02 03:37:08,879 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
>> >
>> >
>> > On 2 April 2013 12:31, Ted Yu wrote:
>> >
>> > > Have you checked the region server log on
>> > > server.epicoders.com,60020,1364559783898 around the time the
>> > > NotServingRegionException was seen in the master log?
>> > >
>> > > What version of HBase are you using?
>> > >
>> > > Thanks
>> > >
>> > >
>> > > On Mon, Apr 1, 2013 at 9:20 PM, Praveen Bysani <praveen.iiith@gmail.com> wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > When I try to restart the HBase service I see the following errors
>> > > > in my HBase Master log:
>> > > >
>> > > > 2013-04-02 03:37:29,713 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
>> > > > 2013-04-02 03:37:29,797 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/server.epicoders.com,60000,1364873849167 from backup master directory
>> > > > 2013-04-02 03:37:29,816 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/server.epicoders.com,60000,1364873849167 already deleted, and this is not a retry
>> > > > 2013-04-02 03:37:29,816 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=server.epicoders.com,60000,1364873849167
>> > > > 2013-04-02 03:37:31,830 WARN org.apache.hadoop.conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
>> > > > 2013-04-02 03:37:31,848 INFO org.apache.hadoop.hbase.master.SplitLogManager: found 0 orphan tasks and 0 rescan nodes
>> > > > 2013-04-02 03:37:32,349 WARN org.apache.hadoop.conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>> > > > 2013-04-02 03:37:32,774 INFO org.apache.hadoop.hbase.master.HMaster: Server active/primary master; server.epicoders.com,60000,1364873849167, sessionid=0x13daf9ed2b90086, cluster-up flag was=false
>> > > > 2013-04-02 03:37:32,817 INFO org.apache.hadoop.hbase.master.snapshot.SnapshotManager: Snapshot feature is not enabled, missing log and hfile cleaners.
>> > > > 2013-04-02 03:37:32,846 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/online-snapshot/acquired already exists and this is not a retry
>> > > > 2013-04-02 03:37:32,856 INFO org.apache.hadoop.hbase.procedure.ZKProcedureUtil: Clearing all procedure znodes: /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort
>> > > > 2013-04-02 03:37:33,095 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>> > > > 2013-04-02 03:37:33,175 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>> > > > 2013-04-02 03:37:33,178 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
>> > > > 2013-04-02 03:37:33,178 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
>> > > > 2013-04-02 03:37:33,200 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 60010
>> > > > 2013-04-02 03:37:33,200 INFO org.mortbay.log: jetty-6.1.26.cloudera.2
>> > > > 2013-04-02 03:37:33,880 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:60010
>> > > > 2013-04-02 03:37:33,881 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
>> > > > 2013-04-02 03:37:34,079 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=test3.jayeson.com.sg,60020,1364873839936
>> > > > 2013-04-02 03:37:34,084 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=test2.jayeson.com.sg,60020,1364873841105
>> > > > 2013-04-02 03:37:34,085 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=server.epicoders.com,60020,1364873849637
>> > > > 2013-04-02 03:37:34,091 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 3, slept for 210 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
>> > > > 2013-04-02 03:37:34,103 WARN org.apache.hadoop.conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
>> > > > 2013-04-02 03:37:35,634 INFO org.apache.hadoop.hbase.master.ServerManager: Finished waiting for region servers count to settle; checked in 3, slept for 1752 ms, expecting minimum of 1, maximum of 2147483647, master is running.
>> > > > 2013-04-02 03:37:35,639 INFO org.apache.hadoop.hbase.master.MasterFileSystem: Log folder hdfs://server.epicoders.com:8020/hbase/.logs/server.epicoders.com,60020,1364873849637 belongs to an existing region server
>> > > > 2013-04-02 03:37:35,639 INFO org.apache.hadoop.hbase.master.MasterFileSystem: Log folder hdfs://server.epicoders.com:8020/hbase/.logs/test2.jayeson.com.sg,60020,1364873841105 belongs to an existing region server
>> > > > 2013-04-02 03:37:35,640 INFO org.apache.hadoop.hbase.master.MasterFileSystem: Log folder hdfs://server.epicoders.com:8020/hbase/.logs/test3.jayeson.com.sg,60020,1364873839936 belongs to an existing region server
>> > > > 2013-04-02 03:37:35,640 INFO org.apache.hadoop.hbase.master.MasterFileSystem: No logs to split
>> > > > 2013-04-02 03:37:35,669 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=server.epicoders.com:2181 sessionTimeout=60000 watcher=hconnection
>> > > > 2013-04-02 03:37:35,671 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server server.epicoders.com/:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
>> > > > 2013-04-02 03:37:35,672 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to server.epicoders.com/:2181, initiating session
>> > > > 2013-04-02 03:37:35,673 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 1043@server.epicoders.com
>> > > > 2013-04-02 03:37:35,676 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server server.epicoders.com/:2181, sessionid = 0x13daf9ed2b90088, negotiated timeout = 60000
>> > > > 2013-04-02 03:37:35,730 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of -ROOT-,,0 at address=server.epicoders.com,60020,1364559783898; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: -ROOT-,,0
>> > > >     at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3211)
>> > > >     at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionInfo(HRegionServer.java:1996)
>> > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> > > >     at java.lang.reflect.Method.invoke(Method.java:597)
>> > > >     at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>> > > >     at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
>> > > >
>> > > > 2013-04-02 03:37:35,733 INFO org.apache.hadoop.hbase.catalog.RootLocationEditor: Unsetting ROOT region location in ZooKeeper
>> > > > 2013-04-02 03:37:36,654 INFO org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: Handling OPENED event for -ROOT-,,0.70236052 from test3.jayeson.com.sg,60020,1364873839936; deleting unassigned node
>> > > > 2013-04-02 03:37:36,663 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region -ROOT-,,0.70236052 that was online on test3.jayeson.com.sg,60020,1364873839936
>> > > > 2013-04-02 03:37:36,683 INFO org.apache.hadoop.hbase.master.HMaster: -ROOT- assigned=1, rit=false, location=test3.jayeson.com.sg,60020,1364873839936
>> > > > 2013-04-02 03:37:36,797 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=server.epicoders.com,60020,1364559783898; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
>> > > >     at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3211)
>> > > >     at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionInfo(HRegionServer.java:1996)
>> > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> > > >     at java.lang.reflect.Method.invoke(Method.java:597)
>> > > >     at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>> > > >     at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
>> > > >
>> > > >
>> > > > After continuing with the same error for a while, the log ends with:
>> > > >
>> > > > 2013-04-02 03:37:39,281 INFO org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: Handling OPENED event for .META.,,1.1028785192 from test2.jayeson.com.sg,60020,1364873841105; deleting unassigned node
>> > > > 2013-04-02 03:37:39,328 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region .META.,,1.1028785192 that was online on test2.jayeson.com.sg,60020,1364873841105
>> > > > 2013-04-02 03:37:39,335 INFO org.apache.hadoop.hbase.master.HMaster: .META. assigned=2, rit=false, location=test2.jayeson.com.sg,60020,1364873841105
>> > > > 2013-04-02 03:37:39,359 INFO org.apache.hadoop.hbase.catalog.MetaMigrationRemovingHTD: Meta version=0; migrated=true
>> > > > 2013-04-02 03:37:39,359 INFO org.apache.hadoop.hbase.catalog.MetaMigrationRemovingHTD: ROOT/Meta already up-to date with new HRI.
>> > > > 2013-04-02 03:37:39,472 INFO org.apache.hadoop.hbase.master.AssignmentManager: Clean cluster startup. Assigning userregions
>> > > > 2013-04-02 03:37:39,535 INFO org.apache.hadoop.hbase.master.LoadBalancer: Reassigned 2 regions. 2 retained the pre-restart assignment.
>> > > > 2013-04-02 03:37:39,535 INFO org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning 2 region(s) across 3 server(s), retainAssignment=true
>> > > > 2013-04-02 03:37:39,541 INFO org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning done
>> > > > 2013-04-02 03:37:39,562 INFO org.apache.hadoop.hbase.master.HMaster: Registered HMaster MXBean
>> > > > 2013-04-02 03:37:39,562 INFO org.apache.hadoop.hbase.master.HMaster: Master has completed initialization
>> > > > 2013-04-02 03:37:39,577 INFO org.apache.hadoop.hbase.master.AssignmentManager: test2.jayeson.com.sg,60020,1364873841105 unassigned znodes=1 of total=1
>> > > > 2013-04-02 03:37:39,578 INFO org.apache.hadoop.hbase.master.AssignmentManager: test3.jayeson.com.sg,60020,1364873839936 unassigned znodes=1 of total=1
>> > > >
>> > > > Finally, I can see the service is running fine, and I can access and
>> > > > write data into HBase from the shell. However, I want to know whether
>> > > > these errors mean anything or affect any behavior of HBase.
>> > > >
>> > > > --
>> > > > Regards,
>> > > > Praveen Bysani
>> > > > http://www.praveenbysani.com
>> > >
>> >
>> >
>> > --
>> > Regards,
>> > Praveen Bysani
>> > http://www.praveenbysani.com
>>
>
>
> --
> Regards,
> Praveen Bysani
> http://www.praveenbysani.com

--
Regards,
Praveen Bysani
http://www.praveenbysani.com
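A closing note on the `Sleeping 2000ms / 4000ms / 8000ms before retry` lines in the region-server shutdown log above: RecoverableZooKeeper retries the znode delete with an exponentially growing sleep before giving up ("ZooKeeper delete failed after 3 retries"). A minimal sketch of that schedule, using a hypothetical helper name rather than the actual HBase implementation:

```python
def backoff_schedule(base_ms=2000, retries=3):
    """Double the sleep before each retry attempt; with the defaults this
    yields the 2000 ms, 4000 ms, 8000 ms sleeps seen in the log."""
    return [base_ms * (2 ** attempt) for attempt in range(retries)]
```

Because the node under /hbase/rs is ephemeral, ZooKeeper removes it anyway once the expired session is cleaned up, which is why the region server can still finish shutting down after the delete fails.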