Subject: Re: New hadoop 1.2 single node installation giving problems
From: Jitendra Yadav <jeetuyadav200890@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 23 Jul 2013 22:51:24 +0530

Try:

hadoop fs -ls /
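If listing / works but a bare "hadoop fs -ls" still fails, the usual cause is that your HDFS home directory does not exist yet. A sketch of the fix, assuming the user is hduser (so the home directory defaults to /user/hduser):

    # create the home directory; in Hadoop 1.x, fs -mkdir creates
    # missing parent directories along the path (like mkdir -p)
    hadoop fs -mkdir /user/hduser

    # with no path argument, -ls now lists /user/hduser
    hadoop fs -ls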

Thanks


On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <ashish.umrani@gmail.com> wrote:
Thanks Jitendra, Bejoy and Yexi,

I got past that. And now the ls command says it cannot access the directory. I am sure this is a permissions issue. I am just wondering which directory I am missing permissions on.

Any pointers?

And once again, thanks a lot

Regards
ashish

hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.

ls: Cannot access .: No such file or directory.



On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <jeetuyadav200890@gmail.com> wrote:
Hi Ashish,
Please check <property></property> in hdfs-site.xml.

It is missing.
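For reference, a corrected hdfs-site.xml would look like this (same values as in your listing, with the missing <property> wrapper added):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.</description>
  </property>
</configuration>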
Thanks.
On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <ashish.umrani@gmail.com> wrote:
Hey, thanks for the response. I have changed 4 files during installation:

core-site.xml
mapred-site.xml
hdfs-site.xml and
hadoop-env.sh


I could not find any issues except that all params in hadoop-env.sh are commented out. Only JAVA_HOME is uncommented.

If you have a quick minute, can you please browse through these files in the email and let me know where the issue could be?

Regards
ashish



I am listing those files below.
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>



mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>



hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
  </description>
</configuration>



hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=


All other params in hadoop-env.sh are commented out.

On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <jeetuyadav200890@gmail.com> wrote:
Hi,
You might have missed some configuration (XML tags). Please check all the conf files.
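Every name/value pair in the *-site.xml files has to sit inside its own <property> element, directly under <configuration>. A minimal skeleton (the property name and value here are placeholders):

<configuration>
  <property>
    <name>some.property.name</name>
    <value>some-value</value>
  </property>
</configuration>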
Thanks
On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <ashish.umrani@gmail.com> wrote:
Hi There,

First of all, sorry if I am asking a stupid question. Being new to the Hadoop environment, I am finding it a bit difficult to figure out why it's failing.

I have installed Hadoop 1.2, based on instructions given in the following link:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

All went well: I could run start-all.sh, and the jps command shows all 5 processes to be present.

However, when I try to do

hadoop fs -ls

I get the following error:

hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.

13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
ls: Cannot access .: No such file or directory.
hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$



Can someone help me figure out what's the issue in my installation?


Regards
ashish