hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: starting hbase after losing data on hadoop namenode
Date Thu, 04 Mar 2010 20:21:28 GMT
If you run fsck on your HDFS, what does it say?
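For reference, a typical invocation is sketched below (assuming the `hadoop` binary is on your PATH and that HBase's root directory is at the common default of `/hbase` — adjust the path to match your `hbase.rootdir` setting):

```shell
# Summarize overall HDFS health; reports missing and corrupt blocks
hadoop fsck /

# Check only the HBase root directory, listing per-file block
# details and locations so you can see which files lost blocks
hadoop fsck /hbase -files -blocks -locations
```

Files reported as CORRUPT or with missing blocks are the ones HBase can no longer read in full.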

On Thu, Mar 4, 2010 at 12:19 PM, mike anderson <saidtherobot@gmail.com> wrote:
> Yesterday my namenode went down because of a lack of hard disk space.
> I was able to get the namenode started again by removing the edits.new
> file and sacrificing some of the data. However, I believe HBase still
> thinks this data exists, as is evident from these types of entries in
> the master's log when I start it up:
> 2010-03-04 15:13:41,477 INFO org.apache.hadoop.hdfs.DFSClient: Could
> not obtain block blk_-4089156066204357637_60829 from any node:
> java.io.IOException: No live nodes contain current block
> Which tools should I use to fix these problems? compact? Or will hbase
> fix itself?
> Thanks,
> Mike
