Subject: Re: MapReduce output could not be written
From: Mostafa Gaber <moustafa.gaber@gmail.com>
To: mapreduce-user@hadoop.apache.org
Date: Tue, 5 Jul 2011 15:58:54 -0400

I faced this problem before. I had set hadoop.tmp.dir to /tmp/..., and my
machine had been running for a long time, so /tmp filled up and HDFS could
not store files any more.

So, check the free space on the partition where you pointed hadoop.tmp.dir.
Also, try assigning hadoop.tmp.dir to another partition that has some space
and does not fill up as quickly as /tmp (a sketch of these checks is at the
end of this message).

On Tue, Jul 5, 2011 at 10:33 AM, Devaraj K <devaraj.k@huawei.com> wrote:

> Check the datanode logs to see whether the datanode has registered with
> the namenode, and whether any problem occurred while the datanode was
> initializing. If it registers successfully, the datanode will show up
> among the live nodes in the namenode UI.
>
> Devaraj K
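To check what Devaraj suggests from the shell, here is a minimal sketch. I am
assuming a 0.20-era single-node setup with a default tarball install, so
adjust the command and the log path for your version:

    # Does the namenode see any live datanodes?
    hadoop dfsadmin -report

    # Look for errors while the datanode started or tried to register with
    # the namenode; log names follow this pattern on a default install:
    grep -iE 'error|exception' $HADOOP_HOME/logs/hadoop-*-datanode-*.log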
> ------------------------------
>
> From: Sudharsan Sampath [mailto:sudhan65@gmail.com]
> Sent: Tuesday, July 05, 2011 6:13 PM
> To: mapreduce-user@hadoop.apache.org
> Subject: MapReduce output could not be written
>
> Hi,
>
> In one of my jobs I am getting the following error:
>
> java.io.IOException: File X could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1282)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
>         at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
>
> and the job fails. I am running a single server that runs all the Hadoop
> daemons, so there is only one datanode in my scenario.
>
> The datanode was up the whole time, and there is enough space on the disk.
> Even at debug level, I do not see any of the following log messages:
>
>     Node X is not chosen because the node is (being) decommissioned
>     ... because the node does not have enough space
>     ... because the node is too busy
>     ... because the rack has too many chosen nodes
>
> Does anyone know of any other scenario in which this can occur?
>
> Thanks
> Sudharsan S

--
Best Regards,
Mostafa Ead
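P.S. A minimal sketch of the hadoop.tmp.dir checks I described at the top;
/tmp and /data/hadoop-tmp are only example paths, so substitute your own:

    # How full is the partition that hadoop.tmp.dir lives on?
    df -h /tmp

    # If it is nearly full, point hadoop.tmp.dir at a roomier partition in
    # core-site.xml and restart the daemons:
    #   <property>
    #     <name>hadoop.tmp.dir</name>
    #     <value>/data/hadoop-tmp</value>
    #   </property>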

So, check the size of the partit= ion where you specified hadoop.tmp.dir to put data into. Also, try to assig= n hadoop.tmp.dir to another partition where there is some space, and which = is not got full fast like /tmp.

On Tue, Jul 5, 2011 at 10:33 AM, Devaraj K <= span dir=3D"ltr"><devaraj.k@huaw= ei.com> wrote:

Check the data= node logs, whether it is registered with namenode or not. At the same time you can check any problem= occurred while initializing the datanode. If it registers successfully it shows that data node in the live nodes of the namenode UI.=

=A0=A0=A0 <= /u>

=A0<= /u>

Devaraj K=A0

-------------= ---------------------------------------------------------------------------= ---------------------------------------------
This e-mail and its attachments contain confidential information from HUAWE= I, which
is intended only for the person or entity whose address is listed above. An= y use of the
information contained herein in any way (including, but not limited to, tot= al or partial
disclosure, reproduction, or dissemination) by persons other than the inten= ded
recipient(s) is prohibited. If you receive this e-mail in error, please not= ify the sender by
phone or email immediately and delete it!ss

=A0


From: Sudharsan Sampath [mailto:sud= han65@gmail.com]
Sent: Tuesday, July 05, 20= 11 6:13 PM
To: mapre= duce-user@hadoop.apache.org
Subject: MapReduce output = could not be written

=A0

Hi,

In one of my jobs I am getting the following error.

java.io.IOException: File X could only be replicated to 0 nodes, instead of= 1
=A0=A0=A0=A0=A0=A0=A0 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNa= mesystem.java:1282)
=A0=A0=A0=A0=A0=A0=A0 at org.apache.hadoop.hdfs.server.namenode.NameNode.ad= dBlock(NameNode.java:469)
=A0=A0=A0=A0=A0=A0=A0 at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
=A0=A0=A0=A0=A0=A0=A0 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImp= l.java:25)
=A0=A0=A0=A0=A0=A0=A0 at java.lang.reflect.Method.invoke(Method.java:597) =A0=A0=A0=A0=A0=A0=A0 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
=A0=A0=A0=A0=A0=A0=A0 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
=A0=A0=A0=A0=A0=A0=A0 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
=A0=A0=A0=A0=A0=A0=A0 at java.security.AccessController.doPrivileged(Native Method)
=A0=A0=A0=A0=A0=A0=A0 at javax.security.auth.Subject.doAs(Subject.java:396)
=A0=A0=A0=A0=A0=A0=A0 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

and the job fails. I am running a single server that runs all the hadoop daemons. So only one datanode in my scenario.

The datanode was up all the time.
There is enough space on the disk.
Even on debug level, I do not see any of the following logs


Node X " is not chosen because the node is (being) decommissioned
because the node does not have enough space
because the node is too busy
because the rack has too many chosen nodes

Do anyone know of anyother scenario in which occur ?

Thanks
Sudharsan S




--
Best R= egards,
Mostafa Ead

--bcaec520ef81e6169a04a757e9b9--