From: Sandeep L <sandeepvreddy@outlook.com>
To: user@hadoop.apache.org
Subject: RE: Can you help me to install HDFS Federation and test?
Date: Mon, 23 Sep 2013 10:56:33 +0530
Hi,

Except for "hadoop.tmp.dir", I have not defined anything in core-site.xml.
Can you please let me know what exactly I should include in core-site.xml?

Thanks,
Sandeep.
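For reference, a minimal core-site.xml for a federated setup might look like the sketch below. This is an assumption on my part, not a config from this thread; the hostname, port, and paths are placeholders, and the per-nameservice settings (dfs.nameservices, dfs.namenode.rpc-address.*) belong in hdfs-site.xml, not here.

```xml
<!-- Hypothetical core-site.xml sketch; hostname, port, and paths
     are placeholders, not values from this thread. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode1:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>
```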



Date: Sat, 21 Sep 2013 23:46:47 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com
To: user@hadoop.apache.org

Sorry for the late reply; I just checked my mail today. Are you using a client-side mount table, as mentioned in the doc you referred to? If you have client-side mount table configuration in your core-site.xml, you won't be able to create directories. In that case, first create the folders without the client-side mount table configuration; once the folders are created, add the client-side mount table configuration back and restart the namenode, datanodes, and all daemons. By the way, which version are you trying to install?
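For context, a client-side mount table in core-site.xml usually looks something like this sketch. The mount table name, mount points, and namenode addresses are placeholders I made up for illustration, not values from this thread:

```xml
<!-- Hypothetical viewfs mount table; ClusterX and the hosts/paths
     are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://ClusterX</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./NN1</name>
    <value>hdfs://namenode1:8020/NN1</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./NN2</name>
    <value>hdfs://namenode2:8020/NN2</value>
  </property>
</configuration>
```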


On Thu, Sep 19, 2013 at 12:00 PM, Sandeep L <sandeepvreddy@outlook.com> wrote:
No, it is not appearing from the other namenode.

Here is the procedure I followed. On NameNode1 I ran the following commands:

bin/hdfs dfs -mkdir test
bin/hdfs dfs -put dummy.txt test

When I ran "bin/hdfs dfs -ls test" from NameNode1 it lists the file in HDFS, but if I run the same command from NameNode2 the output is "ls: test: No such file or directory".

Thanks,
Sandeep.



Date: Wed, 18 Sep 2013 16:58:50 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com
To: user@hadoop.apache.org

It should be visible from every namenode machine. Have you tried this command?

bin/hdfs dfs -ls /yourdirectoryname/


On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sandeepvreddy@outlook.com> wrote:
Hi,

I resolved the issue. There was a problem with the /etc/hosts file.

One more question I would like to ask:

I created a directory in HDFS on NameNode1 and copied a file into it. My question is: should it be visible when I run hadoop fs -ls <PathToDirectory> from the NameNode2 machine?

For me it is not visible; can you explain in a bit more detail?

Thanks,
Sandeep.



Date: Tue, 17 Sep 2013 17:56:00 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com
To: user@hadoop.apache.org

1. Make sure to check the Hadoop logs once you start your datanode, at /home/hadoop/hadoop-<your version>/logs.
2. Make sure all the datanodes are mentioned in the slaves file, and that the slaves file is placed on all machines.
3. Check which datanode is not available, and check the log file on that machine. Are both machines able to do passwordless ssh with each other?
4. Check your /etc/hosts file; make sure the IPs of all your node machines are mentioned there.
5. Make sure you have the datanode folder created as mentioned in the config file.

Let me know if you have any problem.
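The checklist above can be sketched as shell commands. These are illustrative only and need a live cluster; the install path, log file pattern, and hostnames are placeholders I made up, not values from this thread:

```shell
# Hypothetical paths and hosts; adjust to your cluster.
HADOOP_HOME=/home/hadoop/hadoop-2.1.0-beta

# 1. Inspect the datanode log on the machine that is missing.
tail -n 50 "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log

# 2. Confirm every datanode is listed in the slaves file on every machine.
cat "$HADOOP_HOME"/etc/hadoop/slaves

# 3. Verify passwordless ssh in both directions (no password prompt).
ssh datanode2 'hostname'

# 4. Confirm every node's IP is present in /etc/hosts.
cat /etc/hosts
```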


On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sandeepvreddy@outlook.com> wrote:
Hi,

I tried to install HDFS federation with the help of the document given by you.

I have a small issue. I used 2 slaves in the setup; both act as namenode and datanode.

Now the issue is that when I look at the home pages of both namenodes, only one datanode is appearing. As per my understanding, 2 datanodes should appear on both namenodes' home pages.

Can you please let me know if I am missing anything?

Thanks,
Sandeep.



Date: Wed, 11 Sep 2013 15:34:38 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com
To: user@hadoop.apache.org


Maybe this can help you ....


On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <ohsg74@gmail.com> wrote:
Hello~ I am Rho, working in Korea.

I am trying to install HDFS Federation (with the 2.1.0 beta version) and to test it.

After installing 2.1.0 (for the federation test), I have a big problem when testing a file put.

I ran this hadoop command:

./bin/hadoop fs -put test.txt /NN1/

There is an error message:

"put: Renames across FileSystems not supported"

But ./bin/hadoop fs -put test.txt hdfs://namenode:8020/NN1/ is OK.

Why does this happen? This is very sad to me ^^

Can you explain why this happens and give me a solution?

Additionally:

Namenode1 has access to its own namespace (named NN1) and Namenode2 has access to its own namespace (named NN2).

When making a directory on the namenode1 server, ./bin/hadoop fs -mkdir /NN1/nn1_org is OK, but ./bin/hadoop fs -mkdir /NN2/nn1_org is an error. The error message is "/NN2/nn1_org': No such file or directory".

I think this is correct behavior.

But on the namenode2 server, ./bin/hadoop fs -mkdir /NN1/nn2_org is OK, but ./bin/hadoop fs -mkdir /NN2/nn2_org is an error. The error message is "mkdir: `/NN2/nn2_org': No such file or directory".

I would expect making a directory in NN1 to be the error and making a directory in NN2 to be OK.

Why does this happen, and can you give a solution?
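One pattern worth checking (an assumption on my part, not confirmed in this thread) is how the two path styles resolve: a bare path goes through fs.defaultFS, which may be a client-side mount table or point at only one of the namenodes, while a fully-qualified URI names a single namespace directly. The hostnames below are placeholders:

```shell
# Hypothetical hostnames; substitute your own namenodes.

# A bare path resolves against fs.defaultFS, so the same command can
# reach a different namespace (or fail) depending on the client's config:
./bin/hadoop fs -mkdir /NN2/nn1_org

# A fully-qualified URI pins the command to one namespace, regardless
# of fs.defaultFS or any client-side mount table:
./bin/hadoop fs -put test.txt hdfs://namenode1:8020/NN1/
./bin/hadoop fs -mkdir hdfs://namenode2:8020/NN2/nn2_org
```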