hbase-user mailing list archives

From: stack <st...@duboce.net>
Subject: Re: HBase breaks with NotReplicatedYetException after inserting some 100,000 rows
Date: Mon, 22 Dec 2008 23:38:53 GMT
Max Lehn wrote:
> Hi.
>
> I'm running HBase in a small distributed setup (3 machines). When I
> import "larger" sets of data (around 700,000 rows within 20 minutes),
> HBase eventually breaks and the logs show messages like


Which HBase version are you running? How big are your inserts? How many 
regions do you have loaded when it starts to go wonky?
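
If you are not sure how to answer those, a couple of places to look (the 
port and log file names below are the usual defaults; adjust for your setup):

  # The master web UI reports the running version and how many
  # regions each regionserver is carrying (default port 60010):
  #   http://<master-host>:60010/

  # Region opens and splits are also logged by the regionservers:
  grep -i region logs/hbase-*-regionserver-*.log | tail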

>
> 2008-12-17 13:56:20,235 INFO org.apache.hadoop.hdfs.DFSClient:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /hbase/log_10.49.21.176_1229549686892_60020/hlog.dat.1229550980090
> could only be replicated to 0 nodes, instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1270)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
> (...)
>
Usually this indicates dead datanodes ("...could only be replicated to 0 
nodes..."). Have you checked their logs? Are they up? Can you do HDFS 
operations like listing directories?
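
For example, a quick sanity check from the Hadoop command line (a sketch; 
assumes you run it from your Hadoop install directory):

  # Lists live/dead datanodes and per-node capacity:
  bin/hadoop dfsadmin -report

  # Checks that the namenode answers at all:
  bin/hadoop fs -ls /hbase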


> ......
>
> but the other HBase logs do not have any suspicious entries.
>
> Does anyone know what this could be? I'm still new to Hadoop/HBase so
> I don't really have an idea about what could be wrong.

See the Troubleshooting page, particularly the last two entries if you are 
on HBase/Hadoop 0.18.x.

Also make sure you have upped your file descriptors (see the FAQ for how).
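
The FAQ has the full steps; roughly, on Linux (the username below is a 
placeholder for whatever account runs your daemons):

  # Check the current limit for the user running the HBase/Hadoop daemons:
  ulimit -n

  # Raise it in /etc/security/limits.conf, e.g.:
  #   hadoopuser  -  nofile  32768
  # Log out and back in for the new limit to take effect.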

Also check your HBase .out logs. There might be OOMEs buried in there.
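
For instance (the log directory below is an assumption; substitute your 
own $HBASE_HOME/logs):

  # OOMEs often land in the .out files rather than the .log files:
  grep -i OutOfMemoryError logs/*.out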

Come back if none of the above works for you.

Thanks,
St.Ack
