Subject: Re: Problem using distributed cache
From: Peter Cogan <peter.cogan@gmail.com>
To: user@hadoop.apache.org
Date: Fri, 7 Dec 2012 14:06:41 +0000

Hi,

any thoughts on this would be much appreciated

thanks
Peter

On Thu, Dec 6, 2012 at 9:29 PM, Peter Cogan <peter.cogan@gmail.com> wrote:
> Hi,
>
> It's an instance created at the start of the program, like this:
>
> public static void main(String[] args) throws Exception {
>
>     Configuration conf = new Configuration();
>
>     Job job = new Job(conf, "wordcount");
>
>     DistributedCache.addCacheFile(
>             new URI("/user/peter/cacheFile/testCache1"), conf);
>
> On Thu, Dec 6, 2012 at 5:02 PM, Harsh J <harsh@cloudera.com> wrote:
>
>> What is your conf object there? Is it job.getConfiguration() or an
>> independent instance?
>>
>> On Thu, Dec 6, 2012 at 10:29 PM, Peter Cogan <peter.cogan@gmail.com> wrote:
>>
>> > Hi,
>> >
>> > I want to use the distributed cache to allow my mappers to access data.
>> > In main, I'm using the command
>> >
>> > DistributedCache.addCacheFile(
>> >         new URI("/user/peter/cacheFile/testCache1"), conf);
>> >
>> > where /user/peter/cacheFile/testCache1 is a file that exists in HDFS.
>> >
>> > Then, my setup function looks like this:
>> >
>> > public void setup(Context context) throws IOException, InterruptedException {
>> >     Configuration conf = context.getConfiguration();
>> >     Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);
>> >     // etc.
>> > }
>> >
>> > However, this localFiles array is always null.
>> >
>> > I was initially running on a single-host cluster for testing, but I read
>> > that this will prevent the distributed cache from working. I tried a
>> > pseudo-distributed cluster, but that didn't work either.
>> >
>> > I'm using Hadoop 1.0.3.
>> >
>> > thanks Peter
>>
>> --
>> Harsh J
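For readers following the thread: Harsh's question points at the likely cause. In Hadoop 1.x, new Job(conf) takes its own copy of the Configuration, so a cache file added to the original conf after the Job has been constructed never reaches the job. Below is a minimal driver sketch under that assumption, registering the file on job.getConfiguration() instead; the class name WordCountDriver and the input/output path arguments are illustrative, not taken from the thread.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "wordcount");
            job.setJarByClass(WordCountDriver.class);

            // Register the cache file on the job's own configuration, not on the
            // original conf: the Job constructor copies the Configuration, so
            // changes made to conf after this point are not seen by the job.
            DistributedCache.addCacheFile(
                    new URI("/user/peter/cacheFile/testCache1"),
                    job.getConfiguration());

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Adding the file to conf before constructing the Job should also work, since the copy taken by the constructor would then already contain the cache entry.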

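On the mapper side, once the file is registered on the job's configuration, the setup() shown in the thread should see a non-null array from getLocalCacheFiles(). A sketch of reading the localized copy with plain java.io, assuming a line-oriented text file; the class name CacheAwareMapper and the key/value types are illustrative.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, Text> {

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            Configuration conf = context.getConfiguration();
            Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);
            if (localFiles == null || localFiles.length == 0) {
                throw new IOException("No entries found in the distributed cache");
            }

            // getLocalCacheFiles returns paths on the task node's local disk,
            // so the file can be read with ordinary java.io.
            BufferedReader reader = new BufferedReader(new FileReader(localFiles[0].toString()));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    // parse each line of the cache file, e.g. to build a lookup map
                }
            } finally {
                reader.close();
            }
        }
    }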