Subject: Re: New hadoop 1.2 single node installation giving problems
From: Shekhar Sharma <shekhar2581@gmail.com>
To: user@hadoop.apache.org, bejoy.hadoop@gmail.com
Date: Tue, 23 Jul 2013 22:37:57 +0530

It's a warning, not an error.

Create a directory and then do ls. (In your case /user/hduser is not
created until the first time you create a directory or put some file.)

hadoop fs -mkdir sample
hadoop fs -ls
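
For example, the listing before and after the mkdir would look roughly
like this (hostname, timestamps and sizes are illustrative):

  hduser@ubuntu:~$ hadoop fs -ls
  ls: Cannot access .: No such file or directory.
  hduser@ubuntu:~$ hadoop fs -mkdir sample
  hduser@ubuntu:~$ hadoop fs -ls
  Found 1 items
  drwxr-xr-x   - hduser supergroup          0 2013-07-23 10:15 /user/hduser/sample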

If you are getting a permission problem, please check the following:

(1) Did you run the command "hadoop namenode -format" as one user while
accessing HDFS as a different user?
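
A quick way to check that (a sketch using only stock commands):

  whoami             # the user running the 'hadoop fs' commands
  hadoop fs -ls /    # the owner column shows who created each HDFS path

If the namenode was formatted as, say, root but the commands run as
hduser, the owners will not match and permission errors follow.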
On Tue, Jul 23, 2013 at 10:10 PM, <bejoy.hadoop@gmail.com> wrote:

Hi Ashish

In your hdfs-site.xml, within the <configuration> tag you need to have
the <property> tag, and inside a <property> tag you can have the
<name>, <value> and <description> tags.

Regards
Bejoy KS

Sent from remote device, Please excuse typos

From: Ashish Umrani <ashish.umrani@gmail.com>
Date: Tue, 23 Jul 2013 09:28:00 -0700
Subject: Re: New hadoop 1.2 single node installation giving problems

Hey thanks for response. I have changed 4 files during installation:

core-site.xml
mapred-site.xml
hdfs-site.xml and
hadoop-env.sh
I could not find any issues except that all params in the hadoop-env.sh
are commented out. Only JAVA_HOME is uncommented.

If you have a quick minute, can you please browse through these files
in the email and let me know where the issue could be?

Regards
ashish


I am listing those files below.

core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>


mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>


hdfs-site.xml and
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
  </description>
</configuration>


hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=


All other params in hadoop-env.sh are commented.

On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <jeetuyadav200890@gmail.com> wrote:
Hi,

You might have missed some configuration (XML tags). Please check all
the conf files.

Thanks
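
(One quick way to catch malformed conf files, assuming xmllint from
libxml2 is installed:

  xmllint --noout /usr/local/hadoop/conf/*.xml

Note it only flags XML syntax errors; a missing <property> wrapper is
still well-formed XML, so Hadoop's own "bad conf file" warnings are the
better signal for that.)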
On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <ashish.umrani@gmail.com> wrote:
Hi There,

First of all, sorry if I am asking a stupid question. Being new to the
Hadoop environment, I am finding it a bit difficult to figure out why
it's failing.

I have installed hadoop 1.2, based on instructions given in the
following link:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

All went well and I could do the start-all.sh, and the jps command does
show all 5 processes to be present.
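
(For reference, the 5 daemons jps should report on a healthy 1.x
single-node setup, with illustrative PIDs:

  4851 NameNode
  5061 DataNode
  5287 SecondaryNameNode
  5372 JobTracker
  5583 TaskTracker
)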

However when I try to do

hadoop fs -ls

I get the following error:
hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.

13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not <property>
ls: Cannot access .: No such file or directory.
hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$


Can someone help me figure out what's the issue in my installation?

Regards
ashish