Subject: Re: Unable to load file from local to HDFS cluster
From: sandeep vura <sandeepvura@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 9 Apr 2015 00:48:21 +0530

Exactly, but every time it picks one at random. Our datanodes are
192.168.2.81, 192.168.2.82, 192.168.2.83, 192.168.2.84 and 192.168.2.85;
the namenode is 192.168.2.80.

If I restart the cluster, the next time it will show "192.168.2.81:50010
connection closed".

On Thu, Apr 9, 2015 at 12:28 AM, Liaw, Huat (MTO) wrote:

> You cannot start 192.168.2.84:50010… closed by ((192.168.2.x -datanode))
>
> *From:* sandeep vura [mailto:sandeepvura@gmail.com]
> *Sent:* April 8, 2015 2:39 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Unable to load file from local to HDFS cluster
>
> We have been using this setup for a very long time. We were able to run
> all the jobs successfully, but suddenly something went wrong with the
> namenode.
>
> On Thu, Apr 9, 2015 at 12:06 AM, sandeep vura wrote:
>
> I have also noticed another issue when starting the Hadoop cluster with
> the start-all.sh command.
>
> The namenode and datanode daemons start, but sometimes one of the
> datanodes drops its connection and shows the message "connection closed
> by ((192.168.2.x -datanode))". Every time the cluster is restarted, the
> failing datanode keeps changing.
>
> For example, the first time I start the Hadoop cluster, 192.168.2.1
> shows "connection closed". The second time, 192.168.2.2 shows
> "connection closed", and at that point 192.168.2.1 starts successfully
> without any errors.
>
> I haven't been able to figure out the issue exactly. Is it related to
> the network or to the Hadoop configuration?
>
> On Wed, Apr 8, 2015 at 11:54 PM, Liaw, Huat (MTO) wrote:
>
> hadoop fs -put <source> <destination> copies a local file to HDFS.
>
> *From:* sandeep vura [mailto:sandeepvura@gmail.com]
> *Sent:* April 8, 2015 2:24 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Unable to load file from local to HDFS cluster
>
> Sorry Liaw, I tried the same command but it didn't resolve the issue.
>
> Regards,
> Sandeep.V
>
> On Wed, Apr 8, 2015 at 11:37 PM, Liaw, Huat (MTO) wrote:
>
> Should be hadoop dfs -put
>
> *From:* sandeep vura [mailto:sandeepvura@gmail.com]
> *Sent:* April 8, 2015 1:53 PM
> *To:* user@hadoop.apache.org
> *Subject:* Unable to load file from local to HDFS cluster
>
> Hi,
>
> When loading a file from local to the HDFS cluster using the command
>
> hadoop fs -put sales.txt /sales_dept
>
> I get the following exception. Please let me know how to resolve this
> issue. Attached are the logs displayed on the namenode.
>
> Regards,
> Sandeep.v
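[Editor's note] The "connection closed" symptom that moves to a different datanode on every restart can be narrowed down by probing each datanode's data-transfer port directly from the namenode host. The sketch below is not from the thread; it assumes the datanode addresses quoted above and the default DataNode data-transfer port 50010 (the port shown in the error messages). A host that refuses or times out on this port is the one to check for firewall, /etc/hosts, or datanode-log problems.

```python
import socket

def probe(host, port=50010, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Datanode addresses from the thread; 50010 is the default
# data-transfer port for Hadoop 1.x/2.x datanodes.
DATANODES = ["192.168.2.81", "192.168.2.82", "192.168.2.83",
             "192.168.2.84", "192.168.2.85"]

if __name__ == "__main__":
    for dn in DATANODES:
        state = "open" if probe(dn, timeout=0.5) else "closed/unreachable"
        print(f"{dn}:50010 {state}")
```

Running this after each cluster restart shows immediately which node is dropping connections, without waiting for the namenode to log it.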