giraph-user mailing list archives

From Franco Maria Nardini <francomaria.nard...@isti.cnr.it>
Subject memory problem
Date Thu, 13 Sep 2012 21:01:36 GMT
Hi all,

I am running the PageRank code on a single-machine Hadoop installation.
In particular, I am running the code with four workers on a graph of
100,000 nodes. I am getting this exception:

2012-09-13 22:54:38,379 INFO org.apache.hadoop.mapred.Task:
Communication exception: java.lang.OutOfMemoryError: GC overhead limit
exceeded
	at org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessList(ProcfsBasedProcessTree.java:365)
	at org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessTree(ProcfsBasedProcessTree.java:138)
	at org.apache.hadoop.util.LinuxResourceCalculatorPlugin.getProcResourceValues(LinuxResourceCalculatorPlugin.java:401)
	at org.apache.hadoop.mapred.Task.updateResourceCounters(Task.java:808)
	at org.apache.hadoop.mapred.Task.updateCounters(Task.java:830)
	at org.apache.hadoop.mapred.Task.access$600(Task.java:66)
	at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:666)
	at java.lang.Thread.run(Thread.java:662)

It seems to be a problem in the communication layer. Do I have to
specify or modify some Hadoop options to avoid this?
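For reference, the heap available to each worker is governed by the
map task JVM options. A minimal sketch of raising it in
mapred-site.xml, assuming the Hadoop 1.x property name
mapred.child.java.opts (the 2 GB value is illustrative, not a
recommendation from this thread):

```xml
<!-- mapred-site.xml: raise the per-task JVM heap so the Giraph
     workers have more room before hitting the GC overhead limit -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```

The "GC overhead limit exceeded" error means the JVM was spending
almost all of its time in garbage collection while reclaiming very
little memory, which usually indicates the heap is simply too small
for the working set rather than a communication bug.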

Best,

Franco Maria
