hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: latest stable hbase-0.94.13 cannot start master: java.lang.RuntimeException: Failed suppression of fs shutdown hook
Date Tue, 11 Mar 2014 20:13:56 GMT
Resurrecting this old thread. The following error:

"java.lang.RuntimeException: Failed suppression of fs shutdown hook"

is thrown when HBase has been compiled against Hadoop 1 but has Hadoop 2 jars on
its classpath. Someone on IRC just hit the same issue and I was able to
reproduce it after seeing their classpath.
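A quick way to spot this mismatch is to list the Hadoop jars HBase will actually load. The classpath string below is a made-up example for illustration; on a real install you would capture it with `CP="$(bin/hbase classpath)"` instead:

```shell
# Hypothetical classpath mixing a Hadoop 1 core jar with a Hadoop 2 common jar;
# on a real install, use:  CP="$(bin/hbase classpath)"
CP="/opt/hbase-0.94.13/lib/hadoop-core-1.2.1.jar:/opt/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar"

# Split the classpath on ':' and keep only Hadoop jars.
# Seeing both 1.x and 2.x jars here is the mismatch J-D describes.
echo "$CP" | tr ':' '\n' | grep 'hadoop-'
```

If both a `hadoop-core-1.x` and a `hadoop-*-2.x` jar show up, the build and the runtime Hadoop disagree.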

J-D


On Wed, Nov 13, 2013 at 7:00 AM, Ted Yu <yuzhihong@gmail.com> wrote:

> Your hbase.rootdir config parameter points to file: instead of hdfs:
>
> Where is hadoop-2.2.0 running?
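For reference, an hbase.rootdir that points at HDFS looks like the fragment below. The host, port, and path are placeholders, not values from this thread; the authority must match the `fs.default.name` / `fs.defaultFS` of the running HDFS:

```xml
<property>
  <name>hbase.rootdir</name>
  <!-- placeholder: must match the NameNode address of the running HDFS -->
  <value>hdfs://localhost:9000/hbase</value>
</property>
```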
>
> You also need to build tar ball using hadoop 2 profile. See the following
> in pom.xml:
>
>     <!--
>       profile for building against Hadoop 2.0.0-alpha. Activate using:
>        mvn -Dhadoop.profile=2.0
>     -->
>     <profile>
>       <id>hadoop-2.0</id>
>
>
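Per the pom.xml comment above, activating that profile when building would look roughly like this, run from the HBase source root (`-DskipTests` is an optional, commonly used flag, not something mandated by the thread):

```shell
# Build HBase against the Hadoop 2 profile named in pom.xml
mvn clean install -DskipTests -Dhadoop.profile=2.0
```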
> On Wed, Nov 13, 2013 at 6:13 AM, <Jason_Vasdias@mcafee.com> wrote:
>
> > Good day -
> >
> > I'm a Hadoop & HBase newbie, so please excuse me if this is a known
> issue
> > - hoping someone might send me a simple fix!
> >
> > I installed the latest stable tarball, hbase-0.94.13.tar.gz, and
> > followed the instructions at docs/book/quickstart.html
> > (after installing hadoop-2.2.0 and starting the resourcemanager &
> > nodemanager, which are both running and serving their
> > web pages at the configured ports OK).
> >
> > My hbase-site.xml now looks like:
> >
> > <configuration>
> >
> >   <property>
> >     <name>hbase.rootdir</name>
> >     <value>file:///home/jason/3P/hbase/data</value>
> >   </property>
> >
> >   <property>
> >     <name>hbase.zookeeper.property.dataDir</name>
> >     <value>/home/jason/3P/hbase/zookeeper-data</value>
> >   </property>
> >
> > </configuration>
> >
> > I try to start HBase as instructed in the quickstart guide:
> >    $ bin/start-hbase.sh
> >    starting master, logging to /home/jason/3P/hbase-0.94.13/logs/hbase-jason-master-jvds.out
> >
> > But the master does NOT start.
> > I think it is a bug that the start-hbase.sh script does not complain that
> > HBase failed to start.
> > Shall I raise a JIRA issue on this?
> >
> > Anyway, when I look in the logs/hbase-jason-master-jvds.log file, I see
> > that a Java exception occurred:
> >
> > 2013-11-13 13:52:06,316 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/jvds,52926,1384350725521 from backup master directory
> > 2013-11-13 13:52:06,318 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14251bbb3d40000 type:delete cxid:0x13 zxid:0xb txntype:-1 reqpath:n/a Error Path:/hbase/backup-masters/jvds,52926,1384350725521 Error:KeeperErrorCode = NoNode for /hbase/backup-masters/jvds,52926,1384350725521
> > 2013-11-13 13:52:06,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/jvds,52926,1384350725521 already deleted, and this is not a retry
> > 2013-11-13 13:52:06,320 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=jvds,52926,1384350725521
> > 2013-11-13 13:52:06,348 INFO org.apache.hadoop.hbase.master.SplitLogManager: timeout = 300000
> > 2013-11-13 13:52:06,348 INFO org.apache.hadoop.hbase.master.SplitLogManager: unassigned timeout = 180000
> > 2013-11-13 13:52:06,348 INFO org.apache.hadoop.hbase.master.SplitLogManager: resubmit threshold = 3
> > 2013-11-13 13:52:06,352 INFO org.apache.hadoop.hbase.master.SplitLogManager: found 0 orphan tasks and 0 rescan nodes
> > 2013-11-13 13:52:06,385 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
> > 2013-11-13 13:52:06,385 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
> > java.lang.RuntimeException: Failed suppression of fs shutdown hook: Thread[Thread-27,5,main]
> >         at org.apache.hadoop.hbase.regionserver.ShutdownHook.suppressHdfsShutdownHook(ShutdownHook.java:196)
> >         at org.apache.hadoop.hbase.regionserver.ShutdownHook.install(ShutdownHook.java:83)
> >         at org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:191)
> >         at org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:420)
> >         at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:149)
> >         at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
> >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >         at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
> >         at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2120)
> > 2013-11-13 13:52:06,386 ERROR org.apache.hadoop.io.nativeio.NativeIO: Unable to initialize NativeIO libraries
> > java.lang.NoSuchFieldError: workaroundNonThreadSafePasswdCalls
> >         at org.apache.hadoop.io.nativeio.NativeIO.initNative(Native Method)
> >         at org.apache.hadoop.io.nativeio.NativeIO.<clinit>(NativeIO.java:58)
> >         at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:653)
> >         at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
> >         at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:286)
> >         at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:385)
> >         at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:364)
> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:555)
> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:536)
> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:443)
> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:435)
> >         at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:475)
> >         at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:375)
> >         at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:436)
> >         at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
> >         at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
> >         at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:573)
> >         at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432)
> >         at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:226)
> >         at java.lang.Thread.run(Thread.java:744)
> > 2013-11-13 13:52:06,388 DEBUG org.apache.hadoop.hbase.util.FSUtils: Created version file at file:/home/jason/3P/hbase/data set its version at:7
> >
> > Any ideas how I can prevent these errors and start HBase?
> >
> > Thanks & Regards,
> > Jason
> >
>
