hadoop-common-user mailing list archives

From Vasilis Liaskovitis <vlias...@gmail.com>
Subject hadoop idle time on terasort
Date Wed, 02 Dec 2009 20:22:12 GMT
Hi,

I am using hadoop-0.20.1 to run terasort and randsort benchmarking
tests on a small 8-node Linux cluster. Most runs show low (<50%)
core utilization in the map and reduce phases, along with heavy I/O
phases. For a large fraction of the runtime, the cores are idle
while disk I/O traffic is also light.

On average, over the duration of a terasort run, I see 20-30% CPU
utilization, 10-30% iowait, and the remaining 40-70% is idle time.
This data was collected with mpstat across the cores of a specific
node for the duration of the run. The same utilization pattern holds
on all tasktracker/datanode machines. (The namenode's cores and I/O
are mostly idle, so the namenode does not seem to be the
bottleneck.)
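
For reference, this is roughly how I sample the cores on each node
(the interval and output file name are just illustrative):

    # sample all cores every 5 seconds for the duration of the run
    mpstat -P ALL 5 > mpstat-node01.log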

I am looking for an explanation for the significant idle time in
these runs. Could it be caused by misconfigured network/RPC-latency
Hadoop parameters? For example, I tried increasing
mapred.heartbeats.in.second from 100 to 1000, but that didn't help.
According to my netstat results, the network bandwidth (1 GigE card
on each node) is not saturated during the runs.
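
In case I set it incorrectly, this is how I changed the heartbeat
parameter, in conf/mapred-site.xml on the jobtracker node:

    <property>
      <name>mapred.heartbeats.in.second</name>
      <value>1000</value>
    </property>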

Have other people noticed significant CPU idle times that can't be
explained by I/O traffic?

Is it reasonable to expect idle times to decrease as the terasort
dataset scales up on the same cluster? I've only tried two small
datasets, 40GB and 64GB, but core utilization did not increase
across the runs done so far.
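
For completeness, the runs use the stock examples jar, roughly as
below; the row count corresponds to the 40GB case (teragen writes
100-byte rows, so 400,000,000 rows ~= 40GB), and the HDFS paths are
just illustrative:

    # generate the input data, then sort it
    bin/hadoop jar hadoop-0.20.1-examples.jar teragen 400000000 /terasort-in
    bin/hadoop jar hadoop-0.20.1-examples.jar terasort /terasort-in /terasort-out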

Yahoo's paper on terasort (http://sortbenchmark.org/Yahoo2009.pdf)
mentions several performance optimizations, some of which seem
relevant to idle times. I am wondering which, if any, of the Yahoo
patches are part of the hadoop-0.20.1 distribution.

Would it be a good idea to try a development version of hadoop to
resolve this issue?

thanks,

- Vasilis
