Subject: Re: Unable to load file from local to HDFS cluster
From: sandeep vura <sandeepvura@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 9 Apr 2015 00:08:54 +0530

We have been using this setup for a very long time. We were able to run all
the jobs successfully, but something suddenly went wrong with the namenode.

On Thu, Apr 9, 2015 at 12:06 AM, sandeep vura <sandeepvura@gmail.com> wrote:

> I have also noticed another issue when starting the Hadoop cluster with
> the start-all.sh command.
>
> The namenode and datanode daemons start, but sometimes one of the
> datanodes drops its connection, and the message "connection closed by
> (192.168.2.x - datanode)" is shown. Every time the Hadoop cluster is
> restarted, the datanode that drops keeps changing.
>
> For example: the 1st time I start the Hadoop cluster - 192.168.2.1 -
> connection closed. The 2nd time I start the Hadoop cluster -
> 192.168.2.2 - connection closed. At this point 192.168.2.1 starts
> successfully without any errors.
>
> I haven't been able to figure out the issue exactly. Is the issue related
> to the network or to the Hadoop configuration?
>
> On Wed, Apr 8, 2015 at 11:54 PM, Liaw, Huat (MTO) <Huat.Liaw@ontario.ca>
> wrote:
>
>> hadoop fs -put <source> <destination> copies from the local filesystem
>> to HDFS.
>>
>> From: sandeep vura [mailto:sandeepvura@gmail.com]
>> Sent: April 8, 2015 2:24 PM
>> To: user@hadoop.apache.org
>> Subject: Re: Unable to load file from local to HDFS cluster
>>
>> Sorry Liaw, I tried the same command but it didn't resolve the problem.
>>
>> Regards,
>> Sandeep.v
>>
>> On Wed, Apr 8, 2015 at 11:37 PM, Liaw, Huat (MTO) <Huat.Liaw@ontario.ca>
>> wrote:
>>
>> It should be hadoop dfs -put.
>>
>> From: sandeep vura [mailto:sandeepvura@gmail.com]
>> Sent: April 8, 2015 1:53 PM
>> To: user@hadoop.apache.org
>> Subject: Unable to load file from local to HDFS cluster
>>
>> Hi,
>>
>> When loading a file from local to the HDFS cluster using the command
>> below:
>>
>> hadoop fs -put sales.txt /sales_dept
>>
>> I am getting the following exception. Please let me know how to resolve
>> this issue as soon as possible. Attached are the logs displayed on the
>> namenode.
>>
>> Regards,
>> Sandeep.v
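For reference, the basic put workflow looks like the following. This is a
sketch, assuming sales.txt sits in the current local directory and
/sales_dept is the intended HDFS destination, as in the original message:

    # Create the target directory first; -put does not create it for you.
    hadoop fs -mkdir /sales_dept

    # Copy the local file into HDFS (source is local, destination is HDFS).
    hadoop fs -put sales.txt /sales_dept/

    # Verify that the file landed where expected.
    hadoop fs -ls /sales_dept

On the hadoop dfs -put versus hadoop fs -put question raised above: the two
behave the same against HDFS. hadoop dfs is simply the older, HDFS-only form
that was later deprecated in favour of hadoop fs, so switching between them
is unlikely to fix the exception by itself.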
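On the restart issue where a different datanode drops its connection each
time, a rough diagnostic checklist; this is a sketch assuming default daemon
and log locations under $HADOOP_HOME, so paths may differ on your install:

    # Confirm which daemons are actually running on each node.
    jps

    # Ask the namenode which datanodes it can see (live vs. dead).
    hadoop dfsadmin -report    # on Hadoop 2.x: hdfs dfsadmin -report

    # A put will fail while the namenode is still in safe mode.
    hadoop dfsadmin -safemode get

    # Inspect the log of the datanode that dropped; the file name follows
    # the pattern hadoop-<user>-datanode-<hostname>.log.
    tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log

A datanode that alternates between reachable and unreachable across restarts
often points at name resolution or firewall differences between the nodes
rather than at HDFS itself, so checking that /etc/hosts is consistent across
the cluster is a reasonable next step.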