From: Maja Kabiljo <majakabiljo@fb.com>
To: dev@giraph.apache.org
Subject: Re: shared aggregators
Date: Wed, 29 May 2013 19:16:09 +0000
Message-ID: <1F592C080E9ACB4CB1C9EA1865BF3EFA0D1A0801@PRN-MBX01-2.TheFacebook.com>

Feel free to expand the explanation on the website :-)

On 5/29/13 12:00 PM, "Claudio Martella" wrote:

>Yes totally. Thanks.
>
>
>On Wed, May 29, 2013 at 7:19 PM, Maja Kabiljo wrote:
>
>> Claudio,
>>
>> First, the total number of messages is the same, since a worker won't be
>> sending aggregators to itself, so we have K + (N-1)*K.
>>
>> If the master were sending aggregators to all workers, it would have to
>> send K*N amount of data. This way the master only sends K, and then, as
>> Greg said, all workers work in parallel - each sending an additional K
>> (or more precisely K/N * (N-1)).
>>
>> Does this make it clearer?
>>
>> Maja
>>
>> On 5/29/13 9:22 AM, "Greg Malewicz" wrote:
>>
>> >Consider parallel time.
>> >
>> >Greg
>> >
>> >On 5/29/13 9:08 AM, "Claudio Martella" wrote:
>> >
>> >>Hi,
>> >>
>> >>I have a question about the design of shared aggregators.
>> >>Documentation says:
>> >>
>> >>"After MasterCompute.compute, master doesn't do the distribution of all
>> >>aggregators to all workers, but aggregators again have their owners.
>> >>Master only sends each aggregator to its owner, and then each worker
>> >>distributes the aggregators which it owns to all other workers."
>> >>
>> >>Why are the aggregator values not sent directly by the master to all
>> >>workers, instead of doing the two hops it does now?
>> >>
>> >>Suppose I have K aggregators and N workers, the current design requires
>> >>K + K*N messages.
>> >>If the master would send the aggregator values to all the workers
>> >>directly, we would have K*N messages, or actually N messages with K
>> >>values each.
>> >>
>> >>Am I missing something?
>> >>
>> >>Best,
>> >>Claudio
>> >>
>> >>--
>> >> Claudio Martella
>> >> claudio.martella@gmail.com
>> >
>>
>>
>
>
>--
> Claudio Martella
> claudio.martella@gmail.com
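[Archive note: the message-count arithmetic discussed in the thread can be checked with a short sketch. Python is used here purely for illustration (Giraph itself is Java), and the function names below are made up for this example, not Giraph APIs.]

```python
def broadcast_messages(K, N):
    """Hypothetical scheme: master sends all K aggregator values
    to each of the N workers itself."""
    return K * N

def sharded_messages(K, N):
    """Two-hop scheme from the documentation: master sends each aggregator
    only to its owning worker (K messages), then each owner forwards its
    aggregators to the other N-1 workers."""
    master_hop = K            # one message per aggregator, master -> owner
    worker_hop = K * (N - 1)  # owners collectively resend to everyone else
    return master_hop + worker_hop

K, N = 12, 4
# Maja's point: the totals are identical (K*N either way)...
assert broadcast_messages(K, N) == sharded_messages(K, N) == 48

# ...but in the two-hop scheme the master's own traffic drops from K*N to K,
# and the second hop is spread across the workers, each sending roughly
# (K/N) * (N-1) messages in parallel.
per_worker_second_hop = (K / N) * (N - 1)
print(per_worker_second_hop)  # 9.0 for K=12, N=4
```

This is the "parallel time" argument in Greg's reply: total message count is unchanged, but the master is no longer the bottleneck for the fan-out.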