From: Harsh J <harsh@cloudera.com>
Date: Wed, 26 Dec 2012 14:21:48 +0530
Subject: Re: distributed cache
To: user@hadoop.apache.org

Hi Lin,

DistributedCache files are stored into HDFS by the client first. The
TaskTrackers then download and localize them. Therefore, as with any
other file on HDFS, downloads can run efficiently in parallel when there
are more replicas. The point of having higher replication for these
files is also tied to the concept of racks in a cluster: you want
replicas spread across racks so that at task boot-up the downloads
happen with rack locality.

On Sat, Dec 22, 2012 at 6:54 PM, Lin Ma wrote:
> Hi Kai,
>
> Smart answer! :-)
>
> Your assumption is that one distributed cache replica can serve only
> one download session at a time (which is how you get the concurrency
> n/r). The question is: why can't one replica serve multiple concurrent
> download sessions? For example, suppose a TaskTracker takes elapsed
> time t to download a file from a specific replica. Could two
> TaskTrackers download from that same replica in parallel, also in time
> t, or in 1.5t, which would be faster than the sequential download time
> 2t you mentioned?
>
> "In total, r+n/r concurrent operations. If you optimize r depending on
> n, SQRT(n) is the optimal replication level." -- how do you derive
> SQRT(n) as the minimizer of r+n/r? I'd appreciate it if you could
> point me to more details.
>
> regards,
> Lin
>
> On Sat, Dec 22, 2012 at 8:51 PM, Kai Voigt wrote:
>>
>> Hi,
>>
>> Simple math. Assume you have n TaskTrackers in your cluster that will
>> need to access the files in the distributed cache, and r is the
>> replication level of those files.
>>
>> Copying the files into HDFS requires r copy operations over the
>> network. The n TaskTrackers then need to get their local copies from
>> HDFS, so n TaskTrackers copy from r DataNodes, i.e. n/r concurrent
>> operations. In total, r+n/r concurrent operations. If you optimize r
>> depending on n, SQRT(n) is the optimal replication level. So 10 is a
>> reasonable default setting for most clusters that are not 500+ nodes
>> big.
>>
>> Kai
>>
>> On 22.12.2012 at 13:46, Lin Ma wrote:
>>
>> Thanks Kai -- using a higher replication count for what purpose?
>>
>> regards,
>> Lin
>>
>> On Sat, Dec 22, 2012 at 8:44 PM, Kai Voigt wrote:
>>>
>>> Hi,
>>>
>>> On 22.12.2012 at 13:03, Lin Ma wrote:
>>>
>>> > I want to confirm that when a mapper or reducer on a task node
>>> > accesses a distributed cache file, the file resides on disk, not
>>> > in memory. I just want to make sure a distributed cache file is
>>> > not fully loaded into memory, where it would compete for memory
>>> > with the mapper/reducer tasks. Is that correct?
>>>
>>> Yes, you are correct. The JobTracker will put files for the
>>> distributed cache into HDFS with a higher replication count (10 by
>>> default). Whenever a TaskTracker needs those files for a task it is
>>> launching locally, it will fetch a copy to its local disk, so it
>>> won't need to do this again for future tasks on the same node. After
>>> a job is done, all local copies and the HDFS copies of files in the
>>> distributed cache are cleaned up.
>>>
>>> Kai
>>>
>>> --
>>> Kai Voigt
>>> k@123.org
>>
>> --
>> Kai Voigt
>> k@123.org
>

--
Harsh J
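[Editor's note] Kai's optimization can be checked numerically. A minimal
sketch of his cost model (plain Python, not part of the thread; the
function names are mine): minimizing f(r) = r + n/r over r gives
r* = SQRT(n), since the derivative 1 - n/r^2 vanishes there.

```python
import math

def total_copy_operations(n, r):
    """Kai's cost model: r copy operations to write the file into HDFS
    at replication level r, plus roughly n/r concurrent rounds for n
    TaskTrackers to pull their local copies from r replicas."""
    return r + n / r

def best_replication(n):
    """Integer r in [1, n] minimizing r + n/r; analytically r* = sqrt(n)."""
    return min(range(1, n + 1), key=lambda r: total_copy_operations(n, r))

for n in (25, 100, 400):
    # The brute-force optimum matches sqrt(n) for these perfect squares.
    print(n, best_replication(n), round(math.sqrt(n)))
```

For n = 100 TaskTrackers this gives r = 10, consistent with the default
replication of 10 Kai mentions as reasonable below roughly 500 nodes.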
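[Editor's note] On Lin's question about two TaskTrackers pulling from
the same replica in parallel: parallel streams are possible, but they
share the serving DataNode's outbound bandwidth, so elapsed time still
grows with the number of clients per replica. A toy model (my own
simplification, assuming the replica's outbound link is the only
bottleneck and clients spread evenly over replicas):

```python
import math

def download_time(clients, replicas, file_time=1.0):
    """Toy model: a replica can serve many parallel streams, but they
    share its outbound bandwidth, so k concurrent clients on one replica
    each finish in about k * file_time. With clients spread evenly over
    the replicas, elapsed time is ceil(clients / replicas) * file_time."""
    return math.ceil(clients / replicas) * file_time

# Two TaskTrackers on one replica: the parallel streams still take ~2t,
# no better than downloading one after the other.
print(download_time(2, 1))
# 100 TaskTrackers against 10 replicas: ~10t instead of 100t.
print(download_time(100, 10))
```

Under this model, parallelism per replica changes who waits, not how
long the slowest download takes, which is why adding replicas (up to
about SQRT(n)) is what actually shortens the distribution phase.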