hbase-user mailing list archives

From:    Andrew Purtell <apurt...@apache.org>
Subject: Re: Seeing errors after loading a fair amount of data. KeeperException$NoNodeException, IOException
Date:    Thu, 07 Jan 2010 01:41:28 GMT
Have you looked at the Troubleshooting page up on the wiki?

    http://wiki.apache.org/hadoop/Hbase/Troubleshooting

Can you confirm that, as Ryan says, you have taken such steps as upping the DFS
DataNode xceiver limit, the OS file handle limit, etc.?
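
For reference, the usual changes look something like the below. The values are
common starting points from the HBase docs of this era, not requirements, and
the "hadoop" user name is just a stand-in for whatever account runs your
Hadoop/HBase daemons. In hdfs-site.xml on each DataNode:

    <property>
      <!-- Cap on concurrent DataNode transceiver threads; the 0.20-era
           default of 256 is far too low for HBase. Note the property
           name really is spelled "xcievers". -->
      <name>dfs.datanode.max.xcievers</name>
      <value>4096</value>
    </property>

And in /etc/security/limits.conf, to raise the open file limit for that user
(log out and back in, then verify with ulimit -n):

    hadoop  -  nofile  32768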



----- Original Message ----
> From: Ryan Rawson <ryanobjc@gmail.com>
> To: hbase-user@hadoop.apache.org
> Sent: Tue, January 5, 2010 4:19:49 PM
> Subject: Re: Seeing errors after loading a fair amount of data. KeeperException$NoNodeException, IOException
> 
> Hey,
> 
> What we need is the fatal exception in the RegionServer log... but just from
> a quick look, I suspect you might be running into HDFS tuning limits. The
> xciever count and ulimit -n are the key settings you want to verify.
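> 
> For example, to see where a node currently stands (the conf/ path assumes a
> tarball-style install; adjust to wherever your hdfs-site.xml lives):
> 
>     $ ulimit -n                              # current open-file limit
>     $ grep -A1 xcievers conf/hdfs-site.xml   # transceiver cap, if set at all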
> 
> Look in:
> http://hadoop.apache.org/hbase/docs/current/api/overview-summary.html#overview_description
> 
> Let us know!
> -ryan
> 
> On Tue, Jan 5, 2010 at 4:06 PM, Marc Limotte wrote:
> > I'm struggling with a problem that seems to manifest only after I load a
> > fair amount of data.  I run a few map/reduce jobs and load about 80M
> > rows successfully. Then I start another process to load another 35M or so
> > rows, and things start breaking:
> >
> > - Most of the RegionServer processes die (4 out of 5) -- log message below.
> > - HMaster does not die, but seems unresponsive at the status web page (port
> > 60030) -- log message below
> > - HQuorumPeer(s) are still running
> >
> >
> > I restart the entire cluster (full reboot) and try again, but the problem
> > occurs again immediately (similar process state and log messages).
> >
> > It seems that if I truncate the table and restart, I get a similar
> > situation.  *I.e., I can load about 80M rows, but then the RegionServers
> > die and my jobs fail.*
> >
> > Small cluster: 5 nodes
> > 2 x 2 cores, 8 GB memory, Fedora 8
> > HBase 0.20.2 / Hadoop 0.20.2
> >
> > ---------
> >
> > *Master log contains errors like:*
> >
> > 2010-01-05 17:33:05,142 INFO org.apache.hadoop.hbase.master.ServerManager:
> > slave2.cluster1,60020,1262730096922 znode expired
> > 2010-01-05 17:33:05,147 INFO org.apache.hadoop.hbase.master.RegionServerOperation:
> > process shutdown of server slave2.cluster1,60020,1262730096922: logSplit: false,
> > rootRescanned: false, numberOfMetaRegions: 1, onlineMetaRegions.size(): 1
> > 2010-01-05 17:33:05,156 INFO org.apache.hadoop.hbase.regionserver.HLog: Splitting 9
> > hlog(s) in hdfs://master.cluster1:9000/hbase/.logs/slave2.cluster1,60020,1262730096922
> > 2010-01-05 17:33:12,303 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception:
> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> > /hbase/feed/248946612/oldlogfile.log could only be replicated to 0 nodes, instead of 1
> >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
> >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> >        at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> >        at java.security.AccessController.doPrivileged(Native Method)
> >        at javax.security.auth.Subject.doAs(Subject.java:396)
> >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> >        at org.apache.hadoop.ipc.Client.call(Client.java:739)
> >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> >        at $Proxy0.addBlock(Unknown Source)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> >        at $Proxy0.addBlock(Unknown Source)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2906)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2788)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2078)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2264)
> >
> > 2010-01-05 17:33:12,303 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for
> > block null bad datanode[0] nodes == null
> > 2010-01-05 17:33:12,303 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block
> > locations. Source file "/hbase/feed/248946612/oldlogfile.log" - Aborting...
> >
> >
> > *And the region server logs contain this right after start up:*
> >
> > java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException:
> > KeeperErrorCode = NoNode for /hbase/root-region-server
> >        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.readAddressOrThrow(ZooKeeperWrapper.java:332)
> >        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.readAddress(ZooKeeperWrapper.java:318)
> >        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.readRootRegionLocation(ZooKeeperWrapper.java:231)
> >        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:442)
> >        at java.lang.Thread.run(Thread.java:619)
> > Caused by: org.apache.zookeeper.KeeperException$NoNodeException:
> > KeeperErrorCode = NoNode for /hbase/root-region-server
> >        at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
> >        at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
> >        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:892)
> >        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.readAddressOrThrow(ZooKeeperWrapper.java:328)
> >        ... 4 more
> >
> > *and*
> >
> > 2010-01-05 18:49:57,398 WARN org.apache.hadoop.hbase.regionserver.Store: Exception
> > processing reconstruction log
> > hdfs://master.cluster1:9000/hbase/feed/1281791924/oldlogfile.log opening
> > comments -- continuing.  Probably lack-of-HADOOP-1700 causing DATA LOSS!
> > java.io.EOFException
> >        at java.io.DataInputStream.readFully(DataInputStream.java:180)
> >        at java.io.DataInputStream.readFully(DataInputStream.java:152)
> >        at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1450)
> >        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1428)
> >        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1417)
> >        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1412)
> >        at org.apache.hadoop.hbase.regionserver.Store.doReconstructionLog(Store.java:318)
> >        at org.apache.hadoop.hbase.regionserver.Store.runReconstructionLog(Store.java:267)
> >        at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:225)
> >        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1500)
> >        at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:305)
> >        at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1621)
> >        at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1588)
> >        at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1508)
> >        at java.lang.Thread.run(Thread.java:619)
> >
> > This last one seems particularly strange, because HADOOP-1700 was fixed in
> > Hadoop 0.19.
> >
> > Any help on what these exceptions mean and what I can do about them would be
> > appreciated.
> >
> > -Marc
> >
