hadoop-mapreduce-user mailing list archives

From Mostafa Gaber <moustafa.ga...@gmail.com>
Subject Re: MapReduce output could not be written
Date Tue, 05 Jul 2011 19:58:54 GMT
I faced this problem before. I had set hadoop.tmp.dir to /tmp/..., and because
my machine had been running for a long time, /tmp filled up, so HDFS could not
store files any more.
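
For example, a quick way to see how full that partition is (the path here is
just where hadoop.tmp.dir happened to point in my case):

    df -h /tmp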

So, check the free space on the partition that hadoop.tmp.dir points to. Also,
try assigning hadoop.tmp.dir to another partition that has space available and
does not fill up as quickly as /tmp does.
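
To move it, something like this in core-site.xml (hadoop-site.xml on older
releases) should do; /data/hadoop-tmp is only an illustrative path, use any
partition with room:

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/data/hadoop-tmp</value>
    </property>

Note that on a default setup the namenode and datanode data directories live
under hadoop.tmp.dir, so move the existing contents (or reformat a test
cluster) before restarting the daemons.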

On Tue, Jul 5, 2011 at 10:33 AM, Devaraj K <devaraj.k@huawei.com> wrote:

>  Check the datanode logs to see whether the datanode has registered with the
> namenode, and whether any problem occurred while the datanode was
> initializing. If it registered successfully, it will show up among the live
> nodes in the namenode UI.
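>
> For example, a quick check from the command line (the exact output format
> varies by version):
>
>     hadoop dfsadmin -report
>
> This prints how many datanodes are live and the capacity the namenode sees
> for each of them.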
>
> Devaraj K
>
>   ------------------------------
>
> From: Sudharsan Sampath [mailto:sudhan65@gmail.com]
> Sent: Tuesday, July 05, 2011 6:13 PM
> To: mapreduce-user@hadoop.apache.org
> Subject: MapReduce output could not be written
>
> Hi,
>
> In one of my jobs, I am getting the following error:
>
> java.io.IOException: File X could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1282)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
>         at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
>
> and the job fails. I am running a single server that hosts all the Hadoop
> daemons, so there is only one datanode in my scenario.
>
> The datanode was up all the time.
> There is enough space on the disk.
> Even at debug level, I do not see any of the following log messages:
>
>
> Node X is not chosen because the node is (being) decommissioned
> ... because the node does not have enough space
> ... because the node is too busy
> ... because the rack has too many chosen nodes
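>
> For reference, I searched for these with a plain grep over the namenode log
> (the log file name depends on the installation):
>
>     grep "is not chosen" hadoop-*-namenode-*.log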
>
> Does anyone know of any other scenario in which this can occur?
>
> Thanks
> Sudharsan S
>



-- 
Best Regards,
Mostafa Ead
