From: Mohammad Tariq <dontariq@gmail.com>
Date: Wed, 26 Feb 2014 01:10:04 +0530
Subject: Re: Wrong FS hdfs:/localhost:9000 ;expected file///
To: user@hadoop.apache.org

Hi Chirag,

Alternatively, you can add the following two lines to your code to make it work without having to worry about the classpath:

conf.addResource(new Path("/path/to/core-site.xml"));
conf.addResource(new Path("/path/to/hdfs-site.xml"));

Warm Regards,
Tariq
cloudfront.blogspot.com

On Tue, Feb 25, 2014 at 9:42 PM, Vinayakumar B wrote:
> Hi Chirag,
>
> Hadoop expects core-site.xml to be on the classpath, which in fact will be
> present in HADOOP_CONF_DIR.
>
> When you run hadoop jar test.jar, the hadoop script takes care of adding
> all dependencies to the CLASSPATH, including HADOOP_CONF_DIR, and your
> client runs successfully.
>
> When you run using *java -jar test.jar*, the classpath will not be set, and
> the *-jar option of java ignores the classpath* set either via the
> *CLASSPATH* env variable or the *-cp argument*. That means your *test.jar*
> would have to be a complete runnable jar with all dependencies, including
> the *conf files*.
>
> Please verify by running it the following way, constructing a CLASSPATH
> that includes HADOOP_CONF_DIR:
>
> java -cp <CLASSPATH> <MAIN-CLASS> <args>
>
> or simply use hadoop jar test.jar
>
> Cheers,
> Vinayakumar B
>
> *From:* Chris Mawata [mailto:chris.mawata@gmail.com]
> *Sent:* 25 February 2014 20:08
> *To:* user@hadoop.apache.org
> *Subject:* Re: Wrong FS hdfs:/localhost:9000 ;expected file///
>
> The hadoop command gives you a Configuration object with the
> configurations that are in your XML files. In your Java code you are
> probably getting your FileSystem object from a blank Configuration when you
> don't use the hadoop command.
> Chris
>
> On Feb 24, 2014 7:37 AM, "Chirag Dewan" wrote:
>
> Hi All,
>
> I am new to Hadoop. I am using Hadoop 2.2.0. I have a simple client with
> code that reads a file from HDFS on a single-node cluster. When I run my
> code using java -jar mytest.jar, it throws the error Wrong FS
> hdfs://localhost.
>
> When I run the same code with hadoop jar test.jar, it works just fine.
>
> I have my core-site.xml with fs.default.name set to hdfs://localhost.
>
> Am I missing some classpath dependency here?
>
> Thanks in advance.
>
> Chirag Dewan
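The whole thread boils down to where fs.defaultFS (fs.default.name in older configs) comes from: a Configuration that never loads core-site.xml falls back to the local filesystem default, which is exactly the "Wrong FS ... expected file:///" symptom. Below is a stdlib-only Java sketch that mimics that resolution logic so it can run without Hadoop on the classpath; CoreSiteSketch, parseCoreSite, and defaultFs are illustrative names, not Hadoop API.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

// Sketch of how a Hadoop-style Configuration resolves fs.defaultFS:
// with no resource files loaded, the lookup falls back to the local
// filesystem (file:///), which is what produces the "Wrong FS" error.
public class CoreSiteSketch {

    // Parse the <name>/<value> property pairs out of a core-site.xml
    // style document, the way Configuration.addResource would ingest it.
    public static Map<String, String> parseCoreSite(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            Map<String, String> props = new HashMap<>();
            NodeList properties = doc.getElementsByTagName("property");
            for (int i = 0; i < properties.getLength(); i++) {
                Element p = (Element) properties.item(i);
                String name = p.getElementsByTagName("name")
                        .item(0).getTextContent().trim();
                String value = p.getElementsByTagName("value")
                        .item(0).getTextContent().trim();
                props.put(name, value);
            }
            return props;
        } catch (Exception e) {
            throw new RuntimeException("bad core-site.xml", e);
        }
    }

    // Unset keys fall back to the built-in local-FS default.
    public static String defaultFs(Map<String, String> props) {
        return props.getOrDefault("fs.defaultFS", "file:///");
    }

    public static void main(String[] args) {
        String coreSite =
            "<configuration><property>" +
            "<name>fs.defaultFS</name><value>hdfs://localhost:9000</value>" +
            "</property></configuration>";
        // Blank "Configuration": local FS, i.e. the failing java -jar case.
        System.out.println(defaultFs(new HashMap<>()));
        // After loading core-site.xml: HDFS, i.e. the working hadoop jar case.
        System.out.println(defaultFs(parseCoreSite(coreSite)));
    }
}
```

In real client code the equivalent fix is the conf.addResource(...) calls suggested above, or simply launching with hadoop jar so that HADOOP_CONF_DIR is already on the classpath.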