From: Visioner Sadak <visioner.sadak@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 17 Sep 2013 17:56:00 +0530
Subject: Re: Can you help me to install HDFS Federation and test?

1. Make sure to check the Hadoop logs once you start your datanode, at /home/hadoop/<your-hadoop-version>/logs.
2. Make sure all the datanodes are listed in the slaves file, and that the slaves file is present on all machines.
3. Check which datanode is unavailable and look at that machine's log file. Are both machines able to do passwordless SSH to each other?
4. Check your /etc/hosts file; make sure every node machine's IP is listed there.
5. Make sure the datanode folder mentioned in the config file has actually been created.

Let me know if you have any problem.

On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sandeepvreddy@outlook.com> wrote:
> Hi,
>
> I tried to install HDFS Federation with the help of the document you gave.
>
> I have a small issue.
> I used 2 slaves in the setup; both act as namenode and datanode.
> The issue is that when I look at the home pages of both namenodes, only
> one datanode appears.
> As I understand it, 2 datanodes should appear on both namenodes' home
> pages.
>
> Can you please let me know if I am missing anything?
>
> Thanks,
> Sandeep.
>
> ------------------------------
> Date: Wed, 11 Sep 2013 15:34:38 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> Maybe this can help you....
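Checks 2 and 4 of the list above (slaves file vs. /etc/hosts) can be sketched as a small shell script. This is only an illustration: it creates sample files under /tmp so it runs anywhere; on a real cluster you would point SLAVES at your etc/hadoop/slaves file and HOSTS at the real /etc/hosts.

```shell
#!/bin/sh
# Sketch: cross-check the slaves file against /etc/hosts.
# Sample files under /tmp make the sketch self-contained; on a real
# cluster, set SLAVES/HOSTS to the actual files instead.
SLAVES=/tmp/slaves.sample
HOSTS=/tmp/hosts.sample

cat > "$SLAVES" <<'EOF'
datanode1
datanode2
EOF

cat > "$HOSTS" <<'EOF'
192.168.0.11 datanode1
EOF

missing=0
while read -r host; do
  [ -z "$host" ] && continue
  # -w matches the hostname as a whole word in the hosts file
  if ! grep -qw "$host" "$HOSTS"; then
    echo "missing /etc/hosts entry for: $host"
    missing=1
  fi
done < "$SLAVES"

if [ "$missing" -eq 0 ]; then
  echo "every slave has a hosts entry"
else
  echo "add the missing entries before restarting the datanodes"
fi
```

For checks 1 and 3, the equivalent manual commands would be tailing the datanode log under the Hadoop logs directory and running `ssh <host> hostname` in both directions to confirm it completes without a password prompt.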
>
> On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <ohsg74@gmail.com> wrote:
>
>> Hello, I am Rho, working in Korea.
>>
>> I am trying to install HDFS Federation (with the 2.1.0-beta version) and
>> test it. After installing 2.1.0 (for the federation test), I ran into a
>> big problem when testing a file put.
>>
>> I ran this hadoop command:
>> ./bin/hadoop fs -put test.txt /NN1/
>>
>> and got this error message:
>> "put: Renames across FileSystems not supported"
>>
>> But ./bin/hadoop fs -put test.txt hdfs://namnode:8020/NN1/ is OK.
>>
>> Why does this happen? This makes me very sad ^^
>> Can you explain why this happens and give me a solution?
>>
>>
>> Additionally:
>>
>> Namenode1 has access to its own namespace (named NN1) and Namenode2 has
>> access to its own namespace (named NN2).
>> When making a directory on the namenode1 server,
>> ./bin/hadoop fs -mkdir /NN1/nn1_org is OK, but ./bin/hadoop fs -mkdir
>> /NN2/nn1_org is an error.
>>
>> The error message is "mkdir: `/NN2/nn1_org': No such file or directory"
>>
>> I think that is the correct behavior.
>>
>> But on the namenode2 server,
>> ./bin/hadoop fs -mkdir /NN1/nn2_org is OK, but ./bin/hadoop fs -mkdir
>> /NN2/nn2_org is an error.
>> The error message is "mkdir: `/NN2/nn2_org': No such file or directory"
>>
>> I would expect making a directory under NN1 to be the error there, and
>> making one under NN2 to be OK.
>>
>> Why does this happen, and can you give me a solution?
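A note on the put/mkdir behavior in the quoted question: an HDFS client resolves an unqualified path such as /NN1/ against its configured default filesystem, so in a federated setup it can end up talking to a different namenode than intended, while a fully qualified URI always targets the namenode named in it. A minimal sketch of that client-side resolution rule (the hostnames and port here are illustrative assumptions, not values from this thread; the real default comes from fs.defaultFS in core-site.xml):

```shell
#!/bin/sh
# Sketch of how a client decides which filesystem a path refers to.
# DEFAULT_FS stands in for fs.defaultFS from core-site.xml.
DEFAULT_FS="hdfs://namenode1:8020"

resolve() {
  case "$1" in
    hdfs://*) echo "$1" ;;               # fully qualified: target is explicit
    /*)       echo "$DEFAULT_FS$1" ;;    # unqualified: default filesystem wins
  esac
}

resolve "/NN1/test.txt"                  # resolved against namenode1 regardless of intent
resolve "hdfs://namenode2:8020/NN2/dir"  # explicitly targets namenode2
```

In practice, federated clusters often mount every namespace under a viewfs:// mount table so that unqualified paths work the same from every node; whether that fits here depends on the cluster's core-site.xml.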