hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: HMaster daemon is not starting up
Date Sat, 23 Nov 2013 03:02:05 GMT
For your first question, please take a look at
http://hbase.apache.org/book.html#zookeeper
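For production, the usual advice is to run a ZooKeeper ensemble you manage yourself rather than letting HBase manage one, and in either case to keep ZooKeeper's data directory off /tmp. A minimal hbase-site.xml sketch -- the hostname and path below are examples, not taken from your setup:

```xml
<!-- conf/hbase-site.xml -->
<!-- hosts running the ZooKeeper quorum (example hostname) -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com</value>
</property>
<!-- keep ZooKeeper data off /tmp (example path) -->
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/var/lib/hbase/zookeeper</value>
</property>
```

To use an external ensemble, also set "export HBASE_MANAGES_ZK=false" in conf/hbase-env.sh so the HBase start/stop scripts do not manage ZooKeeper themselves.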


On Sat, Nov 23, 2013 at 10:11 AM, Bharat Shetty <bharat.shetty@gmail.com> wrote:

> Yeah, the error is related to HDFS.
>
> I'm using HBase 0.96.0 with Hadoop 2.1.0-beta.
>
> On further digging, it appears that the /tmp directory on the master
> node had some problems. The ZooKeeper instance managed internally by
> HBase appears to store its data in the /tmp directory
> (/tmp/hbase-iouser/zookeeper/). Since this region of the filesystem was
> unstable owing to filesystem problems in the /tmp folder of the master
> node, ZooKeeper communication between the master and the slaves failed,
> and HMaster seems to have shut down as a result. I was able to get the
> master running on another node with the same configuration used previously.
>
> There were no changes to the lib directory of HBase during deployment.
> The setup was working fine, and I was able to run map-reduce programs for
> importing, exporting and filtering millions of records in HBase prior to
> the HMaster failure.
>
> In a production-level scenario, which is preferable: a ZooKeeper ensemble
> we manage ourselves, or the ZooKeeper instance managed internally by HBase?
>
> Also, is there any documentation, or are there links anywhere, on
> production-level configuration for HBase running on top of HDFS (Hadoop)?
>
> Best,
> Bharat
>
> -- B
>
>
> On Fri, Nov 22, 2013 at 1:17 PM, Ted Yu <yuzhihong@gmail.com> wrote:
>
> > The error seemed to be related to HDFS.
> >
> > What version of HBase / Hadoop are you using?
> > Was there any change in the lib directory of the HBase deployment?
> >
> > Cheers
> >
> >
> > On Fri, Nov 22, 2013 at 2:50 PM, Bharat Shetty <bharat.shetty@gmail.com> wrote:
> >
> > > Hi all,
> > >
> > > I've been running into an error of late whose root cause I'm not able
> > > to decipher. Before this I had been able to run HBase on top of HDFS
> > > without any issues. Suddenly HMaster shut down one day, and when I try
> > > to restart it I'm unable to start the HMaster daemon.
> > >
> > > Could you please advise if there is something that I might be missing?
> > >
> > > From the logs (hbase-iouser-master-naples.log):
> > >
> > >
> > > 2013-11-22 09:50:27,096 ERROR [main] master.HMasterCommandLine: Master exiting
> > > java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
> > >         at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2773)
> > >         at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:184)
> > >         at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
> > >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> > >         at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> > >         at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2787)
> > > Caused by: java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
> > >         at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:209)
> > >         at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2397)
> > >         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2407)
> > >         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2424)
> > >         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2463)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2445)
> > >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:363)
> > >         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:275)
> > >         at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:884)
> > >         at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:455)
> > >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> > >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> > >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > >         at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
> > >         at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2768)
> > > From: hbase-iouser-regionserver-naples.log
> > >
> > > 2013-11-22 09:50:28,150 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
> > > java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
> > >         at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:209)
> > >         at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2397)
> > >         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2407)
> > >         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2424)
> > >         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2463)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2445)
> > >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:363)
> > >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:165)
> > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:2276)
> > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:2260)
> > >         at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:62)
> > >         at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
> > >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> > >         at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2311)
> > > Regards,
> > >
> > > - Bharat
> > >
> >
>
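
One more note on the stack trace above: "Not implemented by the DistributedFileSystem FileSystem implementation" is typically a symptom of mixed Hadoop jars on the HBase classpath -- for example, a Hadoop 1.x hadoop-core jar sitting next to the Hadoop 2.x hadoop-common / hadoop-hdfs jars, since FileSystem.getScheme() only exists in Hadoop 2. One quick way to spot it is to list the hadoop jars under the lib directory. The sketch below fabricates a throwaway directory just to show the telltale pattern; on a real cluster, point HBASE_LIB at your actual $HBASE_HOME/lib instead:

```shell
# Throwaway demo directory standing in for $HBASE_HOME/lib; the two jar
# names are fabricated examples of a mixed Hadoop-1/Hadoop-2 classpath.
HBASE_LIB=$(mktemp -d)
touch "$HBASE_LIB/hadoop-core-1.2.1.jar" \
      "$HBASE_LIB/hadoop-common-2.1.0-beta.jar"

# Seeing both a hadoop-core-1.x jar and hadoop-common-2.x jars together
# indicates the classpath mix described above.
ls "$HBASE_LIB" | grep -E '^hadoop-(core|common|hdfs)-' | sort
# -> hadoop-common-2.1.0-beta.jar
#    hadoop-core-1.2.1.jar
```

If the listing on your cluster shows both generations, removing the stale Hadoop 1.x jar (or rebuilding HBase against your Hadoop version) is the usual fix.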
