hadoop-common-user mailing list archives

From Jonathan Hendler <hendler...@yahoo.com>
Subject Re: commodity vs. high perf machines: which would you rather
Date Wed, 07 Nov 2007 20:39:33 GMT
I like these high-level questions. I have yet to do an actual install,
and I'm an utter newbie, but maybe my perspective will contribute
something.

I'd say option 3, which is:
1. One strong, fail-safe machine for the master.
2. As many "commodity machines" as you can muster, with an emphasis on
RAM and disk I/O.

My justification is that the architecture is not P2P and requires the
master to do all the scheduling. Of course, I could be misunderstanding
something, and the master might actually work fine as a weaker machine
that only does scheduling and routing, while the workers need to be
bulky. I don't know the answer, but I suspect that kind of thinking
might point you in the right direction.
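
As I understand it, the master in Hadoop is the machine running the
NameNode and the JobTracker (filesystem metadata and job scheduling),
while every worker runs a DataNode and a TaskTracker that do the
actual storage and computation.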

A number of other factors also tend to get left out of decisions like this:
1. Long-term planning - how much would you need to scale in the
future? Is this just a demo, or the production environment?
2. Network environment - what kind of network will the machines be
living in?
3. The MapReduce algorithm you are doing - jobs are normally disk I/O
bound, but maybe you're doing something "difficult" for the CPU (see
the sketch after this list).
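
To make the I/O-bound point concrete, here is a rough sketch of what a
sum-style rollup looks like against the mapred API. The class names,
the tab-delimited input, and the field positions are just assumptions
for illustration - the point is that the per-record work is trivial,
so the job spends its time reading and writing data rather than
computing:

  import java.io.IOException;
  import java.util.Iterator;

  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.Mapper;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reducer;
  import org.apache.hadoop.mapred.Reporter;

  public class Rollup {

    // Map: parse one tab-delimited line and emit (group key, value).
    public static class RollupMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {
      public void map(LongWritable offset, Text line,
                      OutputCollector<Text, LongWritable> out,
                      Reporter reporter) throws IOException {
        String[] fields = line.toString().split("\t");
        out.collect(new Text(fields[0]),
                    new LongWritable(Long.parseLong(fields[1])));
      }
    }

    // Reduce: sum all values seen for a given key.
    public static class RollupReducer extends MapReduceBase
        implements Reducer<Text, LongWritable, Text, LongWritable> {
      public void reduce(Text key, Iterator<LongWritable> values,
                         OutputCollector<Text, LongWritable> out,
                         Reporter reporter) throws IOException {
        long sum = 0;
        while (values.hasNext()) {
          sum += values.next().get();
        }
        out.collect(key, new LongWritable(sum));
      }
    }
  }

If your "difficult" step replaces that one-line split-and-sum with real
CPU work, the balance shifts and the beefier boxes start to look better.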

I see from other posts that ECC memory is important, and in general
more RAM can't hurt. An SSD would be nice too, to help with disk I/O.
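
On the FAQ point Chris mentions below about splitting the dfs-data
across multiple drives: as far as I can tell that is the dfs.data.dir
property, which takes a comma-separated list of local directories
(e.g. /disk1/dfs/data,/disk2/dfs/data - example paths), and the
DataNode spreads its blocks across them, so a few cheap spindles per
node can go a long way on disk I/O.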

HTH

- Jonathan

Chris Fellows wrote:
> Hello,
>
> Much of the hadoop documentation speaks to large clusters of commodity
> machines. There is a debate on our end about which would be better: a
> small number of high-performance machines (2 boxes with 4 quad-core
> processors) or X number of commodity machines. I feel that disk I/O
> might be the bottleneck with the 2 high-perf machines (though I did
> just read in the FAQ about being able to split the dfs-data across
> multiple drives).
>
> So this is a "which would you rather" question. If you were setting up
> a cluster of machines to perform data rollups/aggregation (and other
> mapred tasks) on files in the 0.25-1 TB range, which would you rather
> have:
>
> 1. 2 machines with 4 quad-core processors each, with your choice of
> RAM and number of drives
> 2. 10 (or more) commodity machines (as defined on the hadoop wiki)
>
> And of course a "why?" would be very helpful.
>
> Thanks!

