hadoop-common-user mailing list archives

From "Sirota, Peter" <sir...@amazon.com>
Subject RE: Hardware inquiry
Date Fri, 05 Feb 2010 18:59:00 GMT
Hi Justin,

Have you guys considered running inside Amazon Elastic MapReduce?  With this service you don't
have to choose one hardware configuration for all of your jobs; instead, you can pick from the 7 hardware
types we have available.  You also don't have to pay capital up front, and can scale with your needs.

Let me know if we can help you get started with Amazon Elastic MapReduce: http://aws.amazon.com/elasticmapreduce/
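If it helps, here is a minimal sketch of launching an Elastic MapReduce job flow with a chosen instance type, using the boto Python library as one possible client (my assumption, not the only option).  The bucket names, instance type, and instance count below are placeholders, not a recommendation:

    # Sketch: launch an EMR job flow with boto; bucket names and
    # instance settings are placeholders.
    from boto.emr.connection import EmrConnection
    from boto.emr.step import StreamingStep

    conn = EmrConnection('<aws-access-key-id>', '<aws-secret-access-key>')

    # One Hadoop Streaming step using the EMR wordcount sample as input;
    # the output bucket is hypothetical.
    step = StreamingStep(
        name='Wordcount example',
        mapper='s3n://elasticmapreduce/samples/wordcount/wordSplitter.py',
        reducer='aggregate',
        input='s3n://elasticmapreduce/samples/wordcount/input',
        output='s3n://my-example-bucket/wordcount/output')

    # Pick the hardware per job flow instead of per purchase: swapping
    # the master/slave instance types trades CPU, memory, and local disk.
    jobflow_id = conn.run_jobflow(
        name='Hardware inquiry example',
        log_uri='s3://my-example-bucket/emr-logs',
        steps=[step],
        master_instance_type='m1.large',
        slave_instance_type='m1.large',
        num_instances=5)

    print(jobflow_id)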

Regards,
Peter Sirota
GM, Amazon Elastic MapReduce

-----Original Message-----
From: Justin Becker [mailto:becker.justin@gmail.com] 
Sent: Wednesday, February 03, 2010 5:15 PM
To: common-user@hadoop.apache.org
Subject: Hardware inquiry

My organization has decided to make a substantial investment in hardware for
processing Hadoop jobs.  Our cluster will be used by multiple groups, so it's
hard to classify the problems as IO-, memory-, or CPU-bound.  Would others be
willing to share their hardware profiles coupled with the problem types
(memory, CPU, etc.)?  Our current setup for the existing cluster is made up
of the following machines:

PowerEdge 1655
2x2 Intel Xeon 1.4GHz
2GB RAM
72GB local HD

PowerEdge 1855
2x2 Intel Xeon 3.2GHz
8GB RAM
146GB local HD

PowerEdge 1955
2x2 Intel Xeon 3.0GHz
4GB RAM
72GB local HD

Obviously, we would like to increase local disk space, memory, and the
number of cores.  The not-so-obvious decision is whether to select high-end
equipment (fewer machines) or lower-class hardware (more machines).  We're
trying to balance "how commodity" against the administration costs.  I've
read the machine scaling material on the Hadoop wiki.  Any additional
real-world advice would be awesome.


Thanks,

Justin
