hbase-user mailing list archives

From Christian Schäfer <syrious3...@yahoo.de>
Subject Re: HBase upgrade
Date Mon, 25 Jun 2012 09:53:19 GMT


I wouldn't work on the HBase problems while HDFS isn't working properly.

Keep an eye on the HDFS logs first.
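As a first pass at verifying HDFS health before touching HBase, something like the following could be run on the NameNode host. This is a sketch, not a fix: on older Hadoop releases the commands are invoked as `hadoop fsck` / `hadoop dfsadmin` instead of the `hdfs` front-end shown here.

```shell
# Check overall filesystem health: under-replicated, corrupt,
# and missing blocks (run as the HDFS superuser).
hdfs fsck / -files -blocks -locations

# Summarize cluster capacity and list live/dead DataNodes.
hdfs dfsadmin -report

# Look specifically at the region server WALs the NameNode is
# complaining about; -openforwrite shows files with unreleased leases.
hdfs fsck /hbase/.logs -openforwrite
```

If fsck reports the filesystem as anything other than HEALTHY, that is the thing to chase before any HBase-level repair.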


BLOCK* BlockInfoUnderConstruction.initLeaseRecovery: No blocks found, lease removed.

DIR* NameSystem.internalReleaseLease: File /hbase/.logs/*******,60020,1340364529713-splitting/********%2C60020%2C1340364529713.1340366765952
has not been closed. Lease recovery is in progress. RecoveryId = 222850 for block blk_-643079438075615295_216548{blockUCState=UNDER_RECOVERY,
primaryNodeIndex=-1, replicas=[]}
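For a lease stuck on a single WAL file like the one in the log above, newer Hadoop releases (2.7 and later; not available on the versions current when this thread was written) can trigger lease recovery manually. The path below is illustrative only, the real one has to be taken from the log message:

```shell
# Ask the NameNode to force lease recovery on the stuck WAL file.
# Substitute the actual server name, timestamp, and file from the log.
hdfs debug recoverLease \
  -path /hbase/.logs/<server>,60020,<ts>-splitting/<wal-file> \
  -retries 3
```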

As I'm not an HDFS expert, I can only suggest formatting HDFS if there is only test data on it.
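If the data really is disposable, a reformat could look roughly like this. This assumes a plain Apache-style install with the standard start/stop scripts on the PATH (a Cloudera-managed cluster would do this through the manager instead), and it destroys everything stored in HDFS:

```shell
# Stop HBase first, then HDFS.
stop-hbase.sh
stop-dfs.sh

# Wipe the NameNode metadata. The DataNode data directories usually
# have to be cleared manually as well, or the DataNodes will refuse
# to register against the newly formatted namespace.
hdfs namenode -format -force

# Bring everything back up; HBase will recreate its root directory.
start-dfs.sh
start-hbase.sh
```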

If the data must be preserved, you will likely need advice from one of the Hadoop pros on the mailing list.

Please always quote the previous posts so that every reply contains the initial question.
Otherwise help will be sparse.

From: Simon <heeg.simon@googlemail.com>
To: scm-users@cloudera.org
CC: Christian Schäfer <syrious3000@yahoo.de>
Sent: Monday, 25 June 2012, 11:31
Subject: Re: HBase upgrade

I created a new table called test, and it is likewise not assigned to any region server, so I tried:

assign 'test'

ERROR: java.net.SocketTimeoutException: Call to hadooplog2.*******/ failed
on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting
for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/

I checked the ports, but they are all open....
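For what it's worth, "port open" and "reachable within the 60000 ms RPC timeout" are not the same thing. A quick probe from the machine running the HBase shell can tell them apart (the hostname and ports below are illustrative; 60020 and 60000 were the default region server and master RPC ports at the time):

```shell
# Probe the region server and master RPC ports with a 5-second limit.
# A firewall that silently drops packets will look "open" in a port
# listing on the server but still time out here.
nc -z -w 5 hadooplog2 60020 && echo "60020 reachable" || echo "60020 blocked"
nc -z -w 5 hadooplog2 60000 && echo "60000 reachable" || echo "60000 blocked"
```

If these probes succeed but the shell still times out, the delay is more likely on the server side (e.g. the region server blocked waiting on the broken HDFS) than in the network.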
