hbase-user mailing list archives

From stchu <stchu.cl...@gmail.com>
Subject org.apache.hadoop.hbase.client.RetriesExhaustedException
Date Wed, 22 Jul 2009 02:10:59 GMT
Hi,

Recently I have been trying to import HDFS text files into HBase. The map function reads
each line (record) of the input files and calculates the index of that record.
The map output is set as: <Key, Value> = <index+"\t"+columnName, record>.
TableReduce is used for the reduce function, which combines the records from the
values iterator into a single String separated by "," and then collects it to the
output (a simplified sketch of the reducer is included at the end of this mail).
The map phase completes without any exception or warning, but
during the reduce phase several tasks report the following exception:

================================================================================
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
contact region server Some server for region
TestIndexP,,1248173466044, row '-0.0001_38.8370', but failed after 10
attempts.
Exceptions:

	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:961)
	at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1397)
	at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1341)
	at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1321)
	at icl.atc.ites.hbase.PIndexCreator$TableReducer.reduce(PIndexCreator.java:320)
	at icl.atc.ites.hbase.PIndexCreator$TableReducer.reduce(PIndexCreator.java:258)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:436)
	at org.apache.hadoop.mapred.Child.main(Child.java:158)

================================================================================

The job eventually fails. We used a 4-machine cluster (1 master + 3 slaves), and
the source data is more than 10 GB with about 3.3 billion rows.
The reduce input is about 3 times the size of the map input. I used Hadoop 0.19.1
and HBase 0.19.3, and I tried both 12 and 53 as the number of reduce tasks, but both
runs failed. Could anyone give me some help? Thanks a lot.
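For reference, the reduce step is essentially the sketch below. It is only a minimal
sketch against the HBase 0.19 BatchUpdate/HTable API that appears in the stack trace;
the class name (IndexReducer) and the column handling are simplifications, not the
actual PIndexCreator code, and the table name "TestIndexP" is taken from the exception
above.

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class IndexReducer extends MapReduceBase
    implements Reducer<Text, Text, Text, Text> {

  private HTable table;

  public void configure(JobConf job) {
    try {
      // Open the target table once per task ("TestIndexP" as in the exception).
      table = new HTable(new HBaseConfiguration(), "TestIndexP");
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public void reduce(Text key, Iterator<Text> values,
      OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    // The map output key is index + "\t" + columnName.
    String[] parts = key.toString().split("\t", 2);
    String row = parts[0];
    String column = parts[1];   // e.g. "family:qualifier"

    // Combine all records for this key into one String separated by ",".
    StringBuilder joined = new StringBuilder();
    while (values.hasNext()) {
      if (joined.length() > 0) {
        joined.append(",");
      }
      joined.append(values.next().toString());
    }

    // Commit one row per reduce key; HTable.commit is where the
    // RetriesExhaustedException above is thrown.
    BatchUpdate update = new BatchUpdate(row);
    update.put(column, joined.toString().getBytes());
    table.commit(update);
  }
}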

stchu
