hadoop-common-user mailing list archives

From Michael Segel <michael_se...@hotmail.com>
Subject RE: More cores Vs More Nodes ?
Date Wed, 14 Dec 2011 17:05:48 GMT


Brian,

I think you missed my point.

The moment you design a cluster for a specific job, you end up getting fscked, because
there's another group who wants to use the shared resource for a job that could be orthogonal
to the original purpose. It happens every day.

This is why you have to ask whether the cluster is being built for a specific purpose. That
means answering the question: "Which of the following best describes your cluster?
a) PoC
b) Development
c) Pre-prod
d) Production
e) Secondary/Backup"

Note that sizing the cluster is a different matter.
If you know you need a PB of storage, you're going to design the cluster differently: once
you get to a certain size, your nodes are going to carry lots of disk and require 10GbE just
for the storage traffic. The number of cores becomes less of an issue; however, again, look
at pricing. 2-socket, 8-core Xeon motherboards are currently at an optimal price point.
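
Here's a rough back-of-the-envelope of why disk, not cores, drives the design at that
scale. Every figure is an illustrative assumption on my part (3x replication, ~25% of raw
disk held back for MapReduce scratch space, 12 x 2 TB drives per node), not a spec from any
real cluster:

    # Sizing sketch for ~1 PB of usable HDFS capacity (all figures assumed).
    usable_pb       = 1.0
    replication     = 3       # assumed HDFS replication factor
    temp_overhead   = 0.25    # assumed fraction of raw disk kept for spill/temp
    drives_per_node = 12
    tb_per_drive    = 2.0

    raw_pb  = usable_pb * replication / (1 - temp_overhead)  # ~4 PB of raw disk
    node_tb = drives_per_node * tb_per_drive                 # 24 TB raw per node
    print(round(raw_pb * 1000 / node_tb))                    # ~167 nodes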

And again this goes back to the point I was trying to make.
You need to look beyond the number of cores as a determining factor. 
If you go too small, you're going to take a hit on the price/performance curve.
(Remember that you also have to consider machine-room real estate: 100 2-core boxes take up
much more space than 25 8-core boxes.)
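
To put rough numbers on the real-estate point (assuming 1U servers and 42U usable per rack;
both figures are my illustrative assumptions):

    # Floor space for the same 200 cores in two form factors (assumed 1U/42U).
    racks_2core = 100 * 1 / 42   # 100 2-core boxes -> ~2.4 racks
    racks_8core = 25 * 1 / 42    #  25 8-core boxes -> ~0.6 of a rack
    print(round(racks_2core, 1), round(racks_8core, 1))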

If you go to the other extreme, a giant 64-core SMP box costs $$$$$, while for $$$ (less
money) you can build out an 8-node cluster.

Beyond that, you really, really don't want to build a custom cluster for a specific job
unless you know you're going to be running that specific job or set of jobs 24x7x365. [And
yes, I have come across such a use case...]

HTH

-Mike
> From: bbockelm@cse.unl.edu
> Subject: Re: More cores Vs More Nodes ?
> Date: Wed, 14 Dec 2011 07:41:25 -0600
> To: common-user@hadoop.apache.org
> 
> Actually, there are varying degrees here.
> 
> If you have a successful project, you will find other groups at your door wanting to
> use the cluster too.  Their jobs might be different from the original use case.
> 
> However, if you don't understand the original use case ("CPU heavy or storage heavy?"
> is a great beginning question), your original project won't be successful.  Then there
> will be no follow-up users because you failed.
> 
> So, you want to have a reasonably general-purpose cluster, but make sure it matches
> well with the type of jobs.  As an example, we had one group whose jobs required an
> estimated CPU-millennia per byte of data… they needed a "general purpose cluster" for a
> certain value of "general purpose".
> 
> Brian
> 
> On Dec 14, 2011, at 7:29 AM, Michael Segel wrote:
> 
> > 
> > Aw Tommy, 
> > Actually no. You really don't want to do this.
> > 
> > If you actually ran a cluster and worked in the real world, you would find that if
> > you purpose-build a cluster for one job, there will be a mandate that some other
> > group needs to use the cluster, that their jobs have different performance profiles,
> > and that your cluster is now suboptimal for their jobs...
> > 
> > Perhaps you meant that you needed to think about the purpose of the cluster? That
> > is, do you want to minimize the nodes but maximize the disk space per node and use
> > the cluster as your backup cluster? (Assuming that you are considering your DR and
> > BCP in your design.)
> > 
> > The problem with your answer is that a "job" has a specific meaning within the
> > Hadoop world.  You should have asked what the purpose of the cluster is.
> > 
> > I agree w/ Brad that it depends ...
> > 
> > But the factors that will impact your cluster design are more along the lines of
> > the cluster's purpose, then the budget, along with your IT constraints.
> > 
> > IMHO it's better to avoid building purpose-built clusters. You end up not being
> > able to recycle the hardware into new clusters easily.
> > 
> > But hey, what do I know? ;-)
> > 
> >> To: common-user@hadoop.apache.org
> >> Subject: RE: More cores Vs More Nodes ?
> >> From: tdeutsch@us.ibm.com
> >> Date: Tue, 13 Dec 2011 09:46:49 -0800
> >> 
> >> It also helps to know the profile of your jobs when you spec the
> >> machines. So in addition to Brad's response, you should consider whether
> >> your jobs will be more storage- or compute-oriented.
> >> 
> >> ------------------------------------------------
> >> Tom Deutsch
> >> Program Director
> >> Information Management
> >> Big Data Technologies
> >> IBM
> >> 3565 Harbor Blvd
> >> Costa Mesa, CA 92626-1420
> >> tdeutsch@us.ibm.com
> >> 
> >> From: Brad Sarsfield <brad@bing.com>
> >> Date: 12/13/2011 09:41 AM
> >> To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
> >> Subject: RE: More cores Vs More Nodes ?
> >> 
> >> Praveenesh,
> >> 
> >> Your question is not naïve; in fact, what counts as "better" hardware
> >> design can ultimately be a very difficult question to answer. If you made
> >> me pick one without much information, I'd go for more machines.  But...
> >> 
> >> It all depends, and there is no right answer... :) 
> >> 
> >> More machines:
> >>   + May run your workload faster
> >>   + Will give you a higher degree of reliability protection from node /
> >>     hardware / hard drive failure
> >>   + More aggregate IO capabilities
> >>   - Capex / opex may be higher than allocating more cores
> >> More cores:
> >>   + May run your workload faster
> >>   + More cores may allow for more tasks to run on the same machine
> >>   + More cores/tasks may reduce network contention, increasing
> >>     task-to-task data flow performance
> >> 
> >> Notice "May run your workload faster" appears in both lists, as it can be
> >> very workload dependent.
> >> 
> >> My experience:
> >> In a recent experiment, given the same total number of cores (64) and the
> >> exact same network / machine configuration:
> >>   A: 8 machines with 8 cores each
> >>   B: 28 machines with 2 cores each (plus a 1x8-core head node)
> >> 
> >> B was able to outperform A by 2x using teragen and terasort. These
> >> machines were running in a virtualized environment, where some of the
> >> behind-the-scenes IO capacity was being regulated to 400 Mbps per node in
> >> the 2-core configuration vs 1 Gbps on the 8-core one. So I would expect
> >> the non-throttled scenario to do even better.
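> >> 
> >> One plausible partial reading of that 2x, as a quick sketch; this is just
> >> arithmetic on the throttle numbers above, not a measured breakdown:
> >> 
> >>     # Aggregate regulated network IO per configuration (figures from above)
> >>     a_gbps = 8 * 1.0        # A: 8 nodes x 1 Gbps    =  8.0 Gbps total
> >>     b_gbps = 28 * 0.4       # B: 28 nodes x 400 Mbps = 11.2 Gbps total
> >>     print(b_gbps / a_gbps)  # ~1.4x raw IO headroom for B; extra spindles
> >>                             # and network endpoints likely cover the rest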
> >> 
> >> ~Brad
> >> 
> >> 
> >> -----Original Message-----
> >> From: praveenesh kumar [mailto:praveenesh@gmail.com] 
> >> Sent: Monday, December 12, 2011 8:51 PM
> >> To: common-user@hadoop.apache.org
> >> Subject: More cores Vs More Nodes ?
> >> 
> >> Hey Guys,
> >> 
> >> So I have a very naive question about Hadoop cluster nodes:
> >> 
> >> more cores or more nodes? Shall I spend money on going from 2-core to
> >> 4-core machines, or spend it on more nodes with fewer cores, e.g. two
> >> 2-core machines instead?
> >> 
> >> Thanks,
> >> Praveenesh
> >> 
> >> 