Subject: Re: New hadoop 1.2 single node installation giving problems
From: Ashish Umrani <ashish.umrani@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 23 Jul 2013 12:43:47 -0700
Thanks Shekhar,

The problem was not in my building of the jar. It was in fact in the execution.

I was running the command

hadoop -jar <jar filename> <qualified class name> input output

The problem was with -jar. It should be

hadoop jar <jar filename> <qualified class name> input output

Thanks for the help once again.

regards
ashish
On Tue, Jul 23, 2013 at 10:31 AM, Shekhar Sharma <shekhar2581@gmail.com> wrote:
hadoop jar wc.jar <fully qualified driver name> inputdata outputdestination


Regards,
Som Shekhar Sharma


On Tue, Jul 23, 2013 at 10:58 PM, Ashish Umrani <ashish.umrani@gmail.com> wrote:
Jitendra, Som,

Thanks. Issue was in not having any file there. It's working fine now.

I am able to do -ls and could also do -mkdir and -put.

Now it is time to run the jar, and apparently I am getting
no main manifest attribute, in wc.jar

But I believe it's because the maven pom file does not have the main class entry.

I will go ahead and change the pom file and build it again; please let me know if you guys think of some other reason. A sketch of the change I have in mind is below.
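Something like the following should put the Main-Class into the jar manifest. This is only a minimal sketch, assuming the maven-jar-plugin is in use; com.example.WordCount is a placeholder for my actual driver class:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <archive>
          <manifest>
            <!-- placeholder: replace with the fully qualified driver class -->
            <mainClass>com.example.WordCount</mainClass>
          </manifest>
        </archive>
      </configuration>
    </plugin>
  </plugins>
</build>

That said, passing the fully qualified driver class on the hadoop jar command line, as Shekhar shows above, works even without a manifest entry.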
Once again, this user group rocks. I have never seen this quick a response.

Regards
ashish


On Tue, Jul 23, 2013 at 10:21 AM, Jitendra Yadav <jeetuyadav200890@gmail.com> wrote:

Try:

hadoop fs -ls /
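With no path argument, hadoop fs -ls lists the current user's HDFS home directory, and on a fresh single-node install that directory usually does not exist yet. A short sketch of one way to create it first, assuming the hduser account from the tutorial:

hadoop fs -mkdir /user/hduser
hadoop fs -ls /user/hduser

Once the home directory exists, the bare hadoop fs -ls should stop complaining about ".".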


Thanks


On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <ashish.umrani@gmail.com> wrote:

Thanks Jitendra, Bejoy and Yexi,

I got past that. And now the ls command says it cannot access the directory. I am sure this is a permissions issue. I am just wondering which directory I am missing permissions on.

Any pointers?

And once again, thanks a lot

Regards
ashish

hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.

ls: Cannot access .: No such file or directory.



On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <jeetuyadav200890@gmail.com> wrote:

Hi Ashish,

Please check the <property></property> tags in hdfs-site.xml.

They are missing; see the corrected stanza below.
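The dfs.replication entry needs a <property> wrapper, roughly like this (same names and values as in your file, with only the missing tags added):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
</configuration>

That would also explain the repeated "bad conf file: element not <property>" warnings in your first mail.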

Thanks.
On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <ashish.umrani@gmail.com> wrote:
Hey, thanks for the response. I have changed 4 files during installation:

core-site.xml
mapred-site.xml
hdfs-site.xml and
hadoop-env.sh


I could not find any issues, except that all params in hadoop-env.sh are commented out. Only JAVA_HOME is uncommented.

If you have a quick minute, can you please browse through these files in this email and let me know where the issue could be?

Regards
ashish



I am listing those files below.
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>



mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>



hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
  </description>
</configuration>



hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=


All other params in hadoop-env.sh are commented out.








On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <jeetuyadav200890@gmail.com> wrote:
Hi,

You might have missed some configuration (XML tags). Please check all the conf files.

Thanks
On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <ashish.umrani@gmail.com> wrote:
Hi There,

First of all, sorry if I am asking a stupid question. Being new to the Hadoop environment, I am finding it a bit difficult to figure out why it is failing.

I have installed hadoop 1.2, based on the instructions given in the following link:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

All went well; I could run start-all.sh, and the jps command does show all 5 processes to be present.

However when I try to do

hadoop fs -ls

I get the following error

hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.

13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
ls: Cannot access .: No such file or directory.
hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$



Can someone help me figure out what the issue is in my installation?


Regards
ashish







