Date: Fri, 12 Oct 2012 09:19:17 +0100
Subject: Re: Why they recommend this (CPU) ?
From: Steve Loughran
To: user@hadoop.apache.org

On 11 October 2012 20:47, Goldstone, Robin J. wrote:

> Be sure you are comparing apples to apples. The E5-2650 has a larger
> cache than the E5-2640, a faster system bus, and can support faster (1600MHz
> vs 1333MHz) DRAM, resulting in greater potential memory bandwidth.
>
> http://ark.intel.com/compare/64590,64591

mmm. There is more L3 cache, and in-CPU sync can be done better than over the inter-socket bus -you're also less vulnerable to NUMA memory allocation issues (*).

There's another issue that drives these recommendations, namely the price curve that server parts follow over time: the Bill-of-Materials curve, aka the "BOM curve". Most parts come in at one price, and that price drops over time as a function of volume parts shipped covering Non-Recurring Engineering (NRE) costs, improvements in yield and manufacturing quality in that specific process, etc., until it levels out at the actual selling price (ASP) to the people who make the boxes (Original Design Manufacturers == ODMs), where it tends to stay for the rest of that part's lifespan.

DRAM and HDDs follow a fairly predictable exponential decay curve. You can look at the cost of a part and its history, determine the variables, and then come up with a prediction of how much it will cost at a time in the near future.
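That prediction can be sketched in a few lines: fit an exponential decay with a price floor to a part's price history, then extrapolate. The price history, the $40 floor, and the fitted rate below are all invented for illustration, not real component pricing.

```python
# Minimal sketch of predicting a part's future price from its BOM curve.
# Model: price(t) = floor + (p0 - floor) * exp(-k * t)
# All numbers here are hypothetical illustration, not real market data.
import math

history = [(0, 120.0), (3, 95.0), (6, 78.0), (12, 58.0)]  # (months, ASP in $)

floor = 40.0          # assumed long-run "flat bit of the curve" price
p0 = history[0][1]    # launch price

# Least-squares slope fit on log(price - floor) yields the decay rate k.
xs = [t for t, _ in history]
ys = [math.log(p - floor) for _, p in history]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
k = -slope

def predict(months):
    """Predicted ASP the given number of months after launch."""
    return floor + (p0 - floor) * math.exp(-k * months)

print("decay rate k = %.3f per month" % k)
print("predicted ASP at 18 months: $%.2f" % predict(18))
```

In practice you'd fit the floor as well, but even this crude version captures the shape: a steep early fall levelling out towards the ASP.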
It's these BOM curves that were key to Dell's business model -direct sales to customers meant they didn't need so much inventory, and could actually get into a situation where they had the cash from the customer before the ODM had built the box, let alone been paid for it. There was a price: utter unpredictability of which DRAM and HDDs you were going to get. Server-side things have stabilised, and all the tier-1 PC vendors qualify a set of DRAM and storage options so they can source from multiple vendors, eliminating a single vendor as a SPOF and allowing them to negotiate better on the cost of parts -which again changes that BOM curve.

This may seem strange, but you should all know that the retail price of a laptop, flatscreen TV, etc. comes down over time -what's not so obvious is the maths behind the changes in its price.

One of the odd parts in this business is the CPU. There is a near-monopoly in supply, and Intel don't want their business at the flat bit of the curve. They need the money not just to keep their shareholders happy, but for the $B needed to build the next generation of fabs, and hence to continue to keep their shareholders happy in future. Intel parts come in high when they initially ship, and stay at that price until the next time Intel change their price list, which is usually quarterly. The first price change is very steep, then the gradient d$/dT reduces, until the part gets low enough that it drops off the price list, never to be seen again except maybe in embedded designs.

What does that mean? It means you pay a lot for the top-of-the-line x86 CPUs, and unless you are 100% sure that you really need them, you may be better off investing your money in:
 -more DRAM with better ECC (product placement: Chipkill) and buffering: less swapping, and the ability to run more reducers per node.
 -more HDDs: more storage in the same number of racks, assuming your site can take the weight.
 -SFF HDDs: less storage but more IO bandwidth off the disks.
 -SSDs: faster storage.
 -GPUs: very good performance for algorithms you can recompile onto them.
 -support from Hortonworks to keep your Hadoop cluster going.
 -10GbE networking, or multiple bonded 1GbE links.
 -more servers (this becomes more of a factor on larger clusters, where the cost savings of the less expensive parts scale up).
 -paying the electricity bill.
 -keeping the cost of building up a Hadoop cluster down, making it more affordable to store PBs of data whose value will only appreciate over time.
 -paying your ops team more money, keeping them happier and so increasing the probability that they will field the 4am support crisis.

That's why it isn't clear cut that 8 cores are better. It's not just a simple performance question -it's the opportunity cost of the price difference scaled up by the number of nodes. You do -as Ted pointed out- need to know what you actually want.

Finally, as a basic "data science" exercise for the reader:

1. Calculate the price curve of, say, a Dell laptop, and compare it with the price curve of an Apple laptop introduced with the same CPU at the same time. Don't look at the absolute values -normalising them to a percentage of the launch price gives a better view.
2. Look at which one follows a soft gradient and which follows more of a step function.
3. Add the Intel pricing to the graph and see how it correlates with the ASP.
4. Determine from this which vendor has the best margins -not just at time of release, but over the lifespan of a product. Integration is a useful technique here. Bear in mind that Apple's NRE costs on a laptop are higher due to the better HW design, and also that the software development is funded from their sales alone.
5. Using this information, decide when is the best time to buy a Dell or an Apple laptop.
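Steps 1-3 of the exercise can be sketched as below: normalise each series to a percentage of its launch price, then measure how "step-like" it is via the largest single-quarter drop. Every price in this sketch is an invented placeholder, not real Dell, Apple, or Intel data.

```python
# Sketch for exercises 1-3: normalise hypothetical quarterly price series
# and compare a soft gradient, a refresh step, and an Intel-style price
# list. All figures are made up for illustration.

dell  = [1000.0,  920.0,  850.0,  790.0,  740.0,  700.0]   # soft gradient
apple = [1200.0, 1200.0, 1200.0, 1020.0, 1020.0, 1020.0]   # step on refresh

def normalise(prices):
    """Express each quarterly price as a % of the launch price."""
    launch = prices[0]
    return [100.0 * p / launch for p in prices]

def max_step(prices):
    """Largest quarter-on-quarter drop, in percentage points of launch."""
    norm = normalise(prices)
    return max(a - b for a, b in zip(norm, norm[1:]))

# Toy CPU price list: flat between quarterly updates, steep first cut,
# then shallower ones (the cut fractions are invented).
cuts = [0.25, 0.12, 0.06, 0.03, 0.0]
intel = [1000.0]
for cut in cuts:
    intel.append(intel[-1] * (1.0 - cut))

print("dell  max quarterly drop: %.1f%%" % max_step(dell))   # 8.0
print("apple max quarterly drop: %.1f%%" % max_step(apple))  # 15.0
print("intel max quarterly drop: %.1f%%" % max_step(intel))  # 25.0
```

The comparison of the max-drop figures is the "soft gradient vs step function" question in miniature; step 4's margin question is then the area between a vendor's retail curve and its parts-cost curve.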
I should make a blog post of this: "server prices: it's all down to the exponential decay equations of the individual parts".

Steve "why yes, I have spent time in the PC industry" Loughran

(*) If you don't know what NUMA is, do some research and think about its implications in heap allocation.

> From: Patrick Angeles
> Reply-To: "user@hadoop.apache.org"
> Date: Thursday, October 11, 2012 12:36 PM
> To: "user@hadoop.apache.org"
> Subject: Re: Why they recommend this (CPU) ?
>
> If you look at comparable Intel parts:
>
> Intel E5-2640
> 6 cores @ 2.5 GHz
> 95W - $885
>
> Intel E5-2650
> 8 cores @ 2.0 GHz
> 95W - $1107
>
> So, for $400 more on a dual proc system -- which really isn't much --
> you get 2 more cores for a 20% drop in speed. I can believe that for some
> scenarios, the faster cores would fare better. Gzip compression is one that
> comes to mind, where you are aggressively trading CPU for lower storage
> volume and IO. An HBase cluster is another example.
>
> On Thu, Oct 11, 2012 at 3:03 PM, Russell Jurney wrote:
>
>> My own clusters are too temporary and virtual for me to notice. I
>> haven't thought of clock speed as having mattered in a long time, so I'm
>> curious what kind of use cases might benefit from faster cores. Is there a
>> category in some way where this sweet spot for faster cores occurs?
>>
>> Russell Jurney http://datasyndrome.com
>>
>> On Oct 11, 2012, at 11:39 AM, Ted Dunning wrote:
>>
>> You should measure your workload. Your experience will vary
>> dramatically with different computations.
>>
>> On Thu, Oct 11, 2012 at 10:56 AM, Russell Jurney <
>> russell.jurney@gmail.com> wrote:
>>
>>> Anyone got data on this? This is interesting, and somewhat
>>> counter-intuitive.
>>>
>>> Russell Jurney http://datasyndrome.com
>>>
>>> On Oct 11, 2012, at 10:47 AM, Jay Vyas wrote:
>>>
>>> > Presumably, if you have a reasonable number of cores - speeding the
>>> cores up will be better than forking a task into smaller and smaller chunks
>>> - because at some point the overhead of multiple processes would be a
>>> bottleneck - maybe due to streaming reads and writes? I'm sure each and
>>> every problem has a different sweet spot.