From: Alberto Cordioli <cordioli.alberto@gmail.com>
Date: Fri, 19 Oct 2012 14:49:16 +0200
Subject: Re: DistributedCache: getLocalCacheFiles() always null
To: user@hadoop.apache.org

Ok, it was my fault. Instead of using getConf() when I added the cache
file, I should have used job.getConfiguration(). Now it works.

Cheers,
Alberto

On 19 October 2012 09:19, Alberto Cordioli <cordioli.alberto@gmail.com> wrote:
> Hi all,
>
> I am trying to use the DistributedCache with the new Hadoop API.
> According to the documentation it seems that nothing changed, and the
> usage is the same as with the old API.
> However, I am facing some problems. This is the snippet in which I use it:
>
> // setting input/output format classes
> ....
>
> // DISTRIBUTED CACHE
> DistributedCache.addCacheFile(new
>     Path("/cdr/input/cgi.csv#cgi.csv").toUri(), getConf());
> job.waitForCompletion(true);
>
> and in my reducer:
>
> @Override
> protected void setup(Context context) throws IOException {
>     Path[] localFiles =
>         DistributedCache.getLocalCacheFiles(context.getConfiguration());
>     ....
> }
>
> localFiles is always null. I read that getLocalCacheFiles() should
> be called in the configure() method, but the mapper/reducer of the new
> API do not have that method.
> What's wrong?
> I read that the DistributedCache has some troubles if you try to run
> your program from a client (e.g., inside an IDE), but I tried also to
> run it directly on the cluster.
>
> Thanks.
>
> --
> Alberto Cordioli

--
Alberto Cordioli
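[Archive note: the fix above can be sketched as a driver along these lines. This is a minimal illustration, not the poster's actual code — the class name `CacheDriver` and the job name are invented, and the surrounding job setup is assumed. The key point is that constructing a Job copies the Configuration it is given, so cache files registered afterwards on the original conf (via getConf()) never reach the submitted job; they must be registered on job.getConfiguration().]

```java
import java.net.URI;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;

// Hypothetical driver illustrating the difference between getConf()
// and job.getConfiguration() when registering DistributedCache files.
public class CacheDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // The Job constructor takes a *copy* of the Configuration passed in.
        Job job = new Job(getConf(), "cache-example");
        job.setJarByClass(CacheDriver.class);
        // ... input/output formats, mapper/reducer classes ...

        // WRONG: this mutates the original Configuration, but the Job
        // already holds its own copy, so the entry is never submitted:
        // DistributedCache.addCacheFile(
        //         new URI("/cdr/input/cgi.csv#cgi.csv"), getConf());

        // RIGHT: register the cache file on the Job's own Configuration,
        // which is the one actually shipped with the job.
        DistributedCache.addCacheFile(
                new URI("/cdr/input/cgi.csv#cgi.csv"),
                job.getConfiguration());

        return job.waitForCompletion(true) ? 0 : 1;
    }
}
```

With the file registered this way, the reducer's setup() in the quoted message receives a non-null array from DistributedCache.getLocalCacheFiles(context.getConfiguration()).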