Subject: Re: Can you help me to install HDFS Federation and test?
From: Visioner Sadak <visioner.sadak@gmail.com>
To: user@hadoop.apache.org
Date: Sat, 21 Sep 2013 23:46:47 +0530

Sorry for the late reply, I just checked my mail today. Are you using a
client-side mount table, as mentioned in the doc you referred to? If you
have client-side mount table configuration in your core-site.xml, you
won't be able to create the directory. In that case, first create the
folders without the client-side mount table configuration; once the
folders are created, add the client-side mount table configuration back
and restart the namenode, datanodes, and all the daemons. By the way,
which version are you trying to install?

On Thu, Sep 19, 2013 at 12:00 PM, Sandeep L <sandeepvreddy@outlook.com> wrote:

> No, it's not appearing from the other namenode.
>
> Here is the procedure I followed. On NameNode1 I ran the following
> commands:
>
> bin/hdfs dfs -mkdir test
> bin/hdfs dfs -put dummy.txt test
>
> When I ran "bin/hdfs dfs -ls test" from NameNode1 it listed the file in
> HDFS, but when I ran the same command from NameNode2 the output was
> "ls: test: No such file or directory".
>
> Thanks,
> Sandeep.
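For anyone following along, "client-side mount table" here means a ViewFs
mount table in core-site.xml. A minimal sketch of what that configuration
looks like for two namespaces; the hostnames, port, and target paths are
placeholders, not Sandeep's actual values:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>viewfs:///</value>
      </property>
      <property>
        <name>fs.viewfs.mounttable.default.link./NN1</name>
        <value>hdfs://namenode1:8020/NN1</value>
      </property>
      <property>
        <name>fs.viewfs.mounttable.default.link./NN2</name>
        <value>hdfs://namenode2:8020/NN2</value>
      </property>
    </configuration>

With this in place the client sees only the mounted paths (/NN1, /NN2),
which is consistent with the advice above: a relative path like "test"
has no mount entry behind it, so create the directories against the
underlying hdfs:// namespaces first, then re-enable the mount table.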
> ------------------------------
> Date: Wed, 18 Sep 2013 16:58:50 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> It should be visible from every namenode machine. Have you tried this
> command?
>
> bin/hdfs dfs -ls /yourdirectoryname/
>
> On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sandeepvreddy@outlook.com> wrote:
>
> Hi,
>
> I resolved the issue. There was a problem with the /etc/hosts file.
>
> One more question I would like to ask:
>
> I created a directory in HDFS on NameNode1 and copied a file into it. My
> question is: will it be visible when I run "hadoop fs -ls
> <PathToDirectory>" from the NameNode2 machine? For me it's not visible;
> can you explain in a bit more detail?
>
> Thanks,
> Sandeep.
>
> ------------------------------
> Date: Tue, 17 Sep 2013 17:56:00 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> 1. Make sure to check the Hadoop logs once you start your datanode, at
>    /home/hadoop/hadoop-<version>/logs
> 2. Make sure all the datanodes are mentioned in the slaves file, and
>    that the slaves file is placed on all machines.
> 3. Check which datanode is not available, and check the log file on that
>    machine. Are both machines able to do passwordless ssh with each
>    other?
> 4. Check your /etc/hosts file; make sure every node machine's IP is
>    mentioned there.
> 5. Make sure you have the datanode folder created as mentioned in the
>    config file. (A sketch of the files for points 2 and 4 follows at the
>    end of this thread.)
>
> Let me know if you have any problem.
>
> On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sandeepvreddy@outlook.com> wrote:
>
> Hi,
>
> I tried to install HDFS Federation with the help of the document you
> gave.
>
> I have a small issue. I used 2 slaves in the setup; both act as namenode
> and datanode. Now the issue is that when I look at the home pages of
> both namenodes, only one datanode appears. As per my understanding, 2
> datanodes should appear on both namenodes' home pages.
>
> Can you please let me know if I am missing anything?
>
> Thanks,
> Sandeep.
>
> ------------------------------
> Date: Wed, 11 Sep 2013 15:34:38 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> Maybe this can help you ....
>
> On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <ohsg74@gmail.com> wrote:
>
> Hello~ I am Rho, working in Korea.
>
> I am trying to install HDFS Federation (with the 2.1.0-beta version) and
> test it. After installing 2.1.0 (for the federation test) I ran into big
> trouble when testing file puts.
>
> I ran this hadoop command:
>
> ./bin/hadoop fs -put test.txt /NN1/
>
> and got the error message:
>
> "put: Renames across FileSystems not supported"
>
> But ./bin/hadoop fs -put test.txt hdfs://namenode:8020/NN1/ is ok.
>
> Why does this happen? This is very sad to me ^^
> Can you explain why this happens and give me a solution?
> Additionally:
>
> Namenode1 has access to its own namespace (named NN1) and Namenode2 has
> access to its own namespace (named NN2).
>
> When making directories on the namenode1 server,
> ./bin/hadoop fs -mkdir /NN1/nn1_org is ok, but ./bin/hadoop fs -mkdir
> /NN2/nn1_org is an error. The error message is "mkdir: `/NN2/nn1_org':
> No such file or directory". I think this is quite right.
>
> But on the namenode2 server,
> ./bin/hadoop fs -mkdir /NN1/nn2_org is ok, but ./bin/hadoop fs -mkdir
> /NN2/nn2_org is an error. The error message is "mkdir: `/NN2/nn2_org':
> No such file or directory".
>
> I would expect making a directory under /NN1 to be the error there, and
> making a directory under /NN2 to be ok.
>
> Why does this happen, and can you give a solution?
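Both failures quoted above are consistent with the path being resolved
against the client's fs.defaultFS (or mount table) rather than against the
namenode you happen to be logged in to. A rough sketch of the difference,
with namenode1/namenode2 and port 8020 as placeholder values rather than
the posters' real settings:

    # Unqualified path: resolved against fs.defaultFS or the client-side
    # mount table, so it can land on the wrong namespace or on no mount
    # point at all
    ./bin/hadoop fs -mkdir /NN2/nn1_org
    ./bin/hadoop fs -put test.txt /NN1/

    # Fully qualified URIs name the target namespace explicitly and work
    # from any machine, whatever fs.defaultFS says
    ./bin/hadoop fs -mkdir hdfs://namenode2:8020/NN2/nn1_org
    ./bin/hadoop fs -put test.txt hdfs://namenode1:8020/NN1/

One plausible reading of the "Renames across FileSystems not supported"
message: -put first writes to a temporary ._COPYING_ file and then renames
it into place, and if the source and destination paths resolve to
different filesystems (for example across viewfs mount points) that final
rename is refused.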
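For Sandeep's two-namenode setup quoted above: in a federated cluster
every datanode is supposed to register with every namenode named in
dfs.nameservices, so both datanodes should indeed appear on both
namenodes' home pages once the configuration and hosts entries agree on
all machines. A minimal hdfs-site.xml sketch for two nameservices; ns1,
ns2, the hostnames, and the ports are illustrative values, not taken from
the thread:

    <configuration>
      <property>
        <name>dfs.nameservices</name>
        <value>ns1,ns2</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.ns1</name>
        <value>namenode1:8020</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.ns1</name>
        <value>namenode1:50070</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.ns2</name>
        <value>namenode2:8020</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.ns2</name>
        <value>namenode2:50070</value>
      </property>
    </configuration>

A datanode that shows up on only one home page is typically reading a
copy of this file that lists only one nameservice, or cannot resolve the
other namenode's hostname.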
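Finally, for points 2 and 4 of the checklist quoted above, a sketch of
the two plain-text files involved; the addresses and hostnames are
examples only:

    # /etc/hosts -- identical entries on every machine
    192.168.1.10   namenode1
    192.168.1.11   namenode2

    # etc/hadoop/slaves -- one datanode hostname per line, same copy on
    # every machine; in Sandeep's setup both hosts also run a datanode
    namenode1
    namenode2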