Subject: Re: Hadoop 2.2.0 Distributed Cache
From: Jonathan Poon <jkpoon@ucdavis.edu>
To: user@hadoop.apache.org
Date: Thu, 27 Mar 2014 09:20:17 -0700

Hi Stanley,

Sorry about the confusion, but I'm trying to read a .txt file into my Mapper
function. I am trying to copy the file to the task nodes using the -files
option when submitting the Hadoop job.

I try to obtain the filename using the following lines of code in my Mapper:

    URI[] localPaths = context.getCacheFiles();
    String configFilename = localPaths[0].toString();

However, when I run the JAR in Hadoop, I get a NullPointerException:

    Error: java.lang.NullPointerException

I'm running Hadoop 2.2 in single-node mode. Not sure if that affects things...

On Wed, Mar 26, 2014 at 8:21 PM, Stanley Shi <sshi@gopivotal.com> wrote:

> Where did you get the error? From the compiler or at runtime?
>
> Regards,
> *Stanley Shi,*
>
> On Thu, Mar 27, 2014 at 7:34 AM, Jonathan Poon <jkpoon@ucdavis.edu> wrote:
>
>> Hi Everyone,
>>
>> I'm submitting a MapReduce job using the -files option to copy a text
>> file that contains properties I use for the map and reduce functions.
>>
>> I'm trying to obtain the local cache files in my mapper function using:
>>
>>     Path[] paths = context.getLocalCacheFiles();
>>
>> However, I get an error saying getLocalCacheFiles() is undefined. I've
>> imported hadoop-mapreduce-client-core-2.2.0.jar as part of my build
>> environment in Eclipse.
>>
>> Any ideas on what could be incorrect?
>>
>> If I'm incorrectly using the distributed cache, could someone point me to
>> an example using the distributed cache with Hadoop 2.2.0?
>>
>> Thanks for your help!
>>
>> Jonathan
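
A note on the two API calls in this thread: getLocalCacheFiles() is the
deprecated DistributedCache-era accessor, and context.getCacheFiles(),
which returns a URI[], is its replacement in the new
org.apache.hadoop.mapreduce API, matching the snippet above. Below is a
minimal sketch of reading a file shipped with -files inside a mapper's
setup(), guarding against the null array that produces the
NullPointerException above. The class name ConfigAwareMapper and the
line-parsing stub are illustrative assumptions, not code from the thread:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ConfigAwareMapper extends Mapper<LongWritable, Text, Text, Text> {

        @Override
        protected void setup(Context context)
                throws IOException, InterruptedException {
            // getCacheFiles() returns null when nothing was registered with
            // the job, so guard before indexing into the array.
            URI[] cacheFiles = context.getCacheFiles();
            if (cacheFiles == null || cacheFiles.length == 0) {
                throw new IOException(
                    "No cache files registered; was -files parsed by ToolRunner?");
            }

            // Files shipped with -files are symlinked into the task's working
            // directory under their base name, so a plain local FileReader on
            // that name works inside the task.
            String name = new Path(cacheFiles[0].getPath()).getName();
            BufferedReader reader = new BufferedReader(new FileReader(name));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    // parse each property line as needed (stub)
                }
            } finally {
                reader.close();
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // ... use the parsed properties here ...
        }
    }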
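
The null array itself usually points at the submission side: -files is a
GenericOptionsParser option, so a main() that builds the Job directly,
without going through ToolRunner, never ships the file and
getCacheFiles() stays null. A hedged driver sketch under that assumption
follows; CacheDemoDriver and the two positional path arguments are
illustrative, not from the thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class CacheDemoDriver extends Configured implements Tool {

        @Override
        public int run(String[] args) throws Exception {
            // By the time run() is called, ToolRunner/GenericOptionsParser
            // has consumed -files and recorded the file in the Configuration,
            // so getCacheFiles() in the tasks returns a non-null array.
            Job job = Job.getInstance(getConf(), "distributed-cache-demo");
            job.setJarByClass(CacheDemoDriver.class);
            job.setMapperClass(ConfigAwareMapper.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            return job.waitForCompletion(true) ? 0 : 1;
        }

        public static void main(String[] args) throws Exception {
            // Submitting without ToolRunner leaves -files unparsed, which is
            // a common way to end up with getCacheFiles() == null.
            System.exit(ToolRunner.run(new Configuration(),
                    new CacheDemoDriver(), args));
        }
    }

With both sketches in one jar, a submission such as

    hadoop jar demo.jar CacheDemoDriver -files config.txt /input /output

makes config.txt visible in every task's working directory under its base
name. Alternatively, job.addCacheFile(new URI(...)) inside run() registers
the file programmatically instead of relying on the -files flag.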