hadoop-common-dev mailing list archives

From: Uma Maheswara Rao G 72686 <mahesw...@huawei.com>
Subject: Re: help me to solve Exception
Date: Wed, 15 Jun 2011 04:31:26 GMT
Hi Zinab,

1) First, check whether all DNs are actually running.
   The NN takes a while (the heartbeat expiry period) to detect a DN shutdown, so the UI
may still list a dead DN among the live nodes during that window.
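   A quick way to check is:

     bin/hadoop dfsadmin -report

   which prints the live/dead node counts and each node's remaining capacity. If you
prefer to check from code, here is a minimal sketch against the 0.20-style client API
that your stack trace suggests (the cast and the getClient() call may differ in other
versions):

     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.fs.FileSystem;
     import org.apache.hadoop.hdfs.DistributedFileSystem;
     import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
     import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType;

     public class LiveNodeCheck {
       public static void main(String[] args) throws Exception {
         // Reads fs.default.name etc. from the usual *-site.xml files.
         Configuration conf = new Configuration();
         DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
         // Ask the NN which DNs it currently considers live; zero live nodes
         // (or zero remaining bytes everywhere) explains the
         // "could only be replicated to 0 nodes" error.
         for (DatanodeInfo dn : dfs.getClient().datanodeReport(DatanodeReportType.LIVE)) {
           System.out.println(dn.getName() + " remaining=" + dn.getRemaining() + " bytes");
         }
       }
     }

   Also note: with default settings the NN only declares a DN dead after roughly ten
minutes of missed heartbeats, so a freshly killed DN can show as live for a while.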

2) When the NN chooses DNs for a write, it checks whether each candidate node is a good target.
   A node is rejected when any of these conditions holds:
     * The node does not have enough free space.
     * The node is too busy (too much traffic/load).
     * The DN is decommissioned (or being decommissioned).
     * The node's rack already has too many chosen nodes.

   If all of your target nodes are in one of the above states, no good node is left to
write to, and the exception below is the result. The sketch below paraphrases those checks.
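   For reference, the NN's "is this node a good target" test looks roughly like the
following. This is an illustrative paraphrase, not the actual NameNode source; the names,
thresholds, and the headroom factor are approximations:

     public class TargetCheck {
       // Headroom factor: the NN wants room for several more blocks,
       // not just the one being written (illustrative value).
       static final int MIN_BLOCKS_FOR_WRITE = 5;

       static boolean isGoodTarget(boolean decommissioned, long remainingBytes,
                                   long blockSize, int xceiverCount,
                                   double avgLoad, int chosenInRack,
                                   int maxPerRack) {
         if (decommissioned) return false;                       // node leaving service
         if (remainingBytes < blockSize * MIN_BLOCKS_FOR_WRITE)
           return false;                                         // not enough free space
         if (xceiverCount > 2 * avgLoad) return false;           // node too busy
         if (chosenInRack >= maxPerRack) return false;           // rack already full
         return true;
       }

       public static void main(String[] args) {
         // Example: 64 MB blocks but only 100 MB free -- rejected on space,
         // one common cause of "replicated to 0 nodes".
         System.out.println(isGoodTarget(false, 100L << 20, 64L << 20,
                                         2, 4.0, 0, 2));  // prints: false
       }
     }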


Regards,
Uma Mahesh


----- Original Message -----
From: Zinab Ahmed Mahmoud Elgendy <zinabelgendy@yahoo.com>
Date: Wednesday, June 15, 2011 3:43 am
Subject: help me to solve Exception
To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>

> Can anyone help me find a solution to this exception?
> 
> 11/06/04 01:47:09 WARN hdfs.DFSClient: DataStreamer Exception: 
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
> /user/eng-zinab/inn/In (copy) could only be replicated to 0 nodes, 
> instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 
>     at org.apache.hadoop.ipc.Client.call(Client.java:740)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at $Proxy0.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>     at $Proxy0.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> 
> 11/06/04 01:47:09 WARN hdfs.DFSClient: Error Recovery for block 
> null bad datanode[0] nodes == null
> 11/06/04 01:47:09 WARN hdfs.DFSClient: Could not get block 
> locations. Source file "/user/eng-zinab/inn/In (copy)" - Aborting...
> copyFromLocal: java.io.IOException: File /user/eng-zinab/inn/In 
> (copy) could only be replicated to 0 nodes, instead of 1
> 11/06/04 01:47:09 ERROR hdfs.DFSClient: Exception closing file 
> /user/eng-zinab/inn/In (copy) : 
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
> /user/eng-zinab/inn/In (copy) could only be replicated to 0 nodes, 
> instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
> /user/eng-zinab/inn/In (copy) could only be replicated to 0 nodes, 
> instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 
>     at org.apache.hadoop.ipc.Client.call(Client.java:740)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at $Proxy0.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>     at $Proxy0.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> 
> Thank you.
