hadoop-hdfs-user mailing list archives

From Chih-Hsien Wu <chjaso...@gmail.com>
Subject Hadoop 1.2.1 corrupt after restart from out of heap memory exception
Date Wed, 23 Oct 2013 19:20:08 GMT
I uploaded data into the distributed file system, and the cluster summary shows there
is enough heap memory. However, whenever I try to run a Mahout 0.8 command,
the system throws an out-of-heap-memory exception. I shut down the Hadoop
cluster and allocated more memory via mapred.child.java.opts, but after I
restarted the cluster, the namenode was corrupted. Any help is appreciated.
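For reference, the heap increase was applied in mapred-site.xml along these lines (the 2048 MB value below is illustrative, not necessarily the exact size used):

```xml
<!-- mapred-site.xml: raises the maximum heap for child map/reduce JVMs.
     The -Xmx value is illustrative; the actual size set may differ. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```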
