hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Update of "HardwareBenchmarks" by johanoskarsson
Date Tue, 04 Dec 2007 12:34:18 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by johanoskarsson:

New page:
= Cluster benchmark =
How do different hardware configurations perform with Hadoop?
Hopefully this page can help us answer that question and help new users choose
hardware of their own based on our experience.

Please add your own configurations and sort benchmark results below.
Information on how to run the sort benchmark is at http://wiki.apache.org/lucene-hadoop/Sort [[BR]]
It generates roughly 10 GB of random data per node and then sorts it.
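
As a rough illustration (a minimal sketch, assuming the examples jar that ships with your release; the jar name and the "rand"/"rand-sort" directory names here are placeholders, see the Sort page above for the exact steps), the two phases are usually launched like this:

{{{
# Step 1: each node writes ~10 GB of random data into the "rand" directory in HDFS
bin/hadoop jar hadoop-*-examples.jar randomwriter rand

# Step 2: sort the generated data into "rand-sort"
bin/hadoop jar hadoop-*-examples.jar sort rand rand-sort
}}}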

== Hardware ==
||Cluster name||CPU model||CPU freq||CPUs||Cores/CPU||RAM||Disk size||Disk interface||Disk RPM||Disks||Network||Machines||Racks||
||Herd1||Intel Xeon LV||2.0 GHz||2||2||4 GB||250 GB||SATA||7200 RPM||4||GigE||35||2||
||Herd2||Intel Xeon 5320||1.86 GHz||2||4||8 GB||750 GB||SATA2||7200 RPM||4||GigE||10||1||

== Benchmark ==
All benchmarks were run with the default randomwriter and sort parameters.

I ran into some odd behavior on Herd2: if I set the max tasks per node to 10 instead of
5, the reducers do not start until the mappers finish, which slows the job significantly.

||Cluster name||Version||Sort time (s)||Mappers||Reducers||Max tasks/node||Speculative execution||Parallel copies||Sort MB||Sort factor||
||Herd1||0.14.3||3977||5600||175||5||Yes||20||200||10||
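
For reference, the tuning columns above correspond to properties in hadoop-site.xml. The snippet below is only a sketch with the Herd1 values filled in; the property names are assumed from 0.14-era Hadoop and may be named differently (or split into separate map/reduce variants) in other releases, so check your release's hadoop-default.xml before copying them:

{{{
<!-- Sketch only: Herd1 settings from the table, property names assumed from 0.14-era Hadoop -->
<property>
  <name>mapred.tasktracker.tasks.maximum</name>   <!-- "Max tasks/node" column -->
  <value>5</value>
</property>
<property>
  <name>mapred.speculative.execution</name>       <!-- "Speculative execution" column -->
  <value>true</value>
</property>
<property>
  <name>mapred.reduce.parallel.copies</name>      <!-- "Parallel copies" column -->
  <value>20</value>
</property>
<property>
  <name>io.sort.mb</name>                         <!-- "Sort MB" column -->
  <value>200</value>
</property>
<property>
  <name>io.sort.factor</name>                     <!-- "Sort factor" column -->
  <value>10</value>
</property>
}}}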
