hadoop-user mailing list archives

From: Gopy Krishna <g...@rapisource.com>
Subject: Re: can't submit remote job
Date: Mon, 18 May 2015 13:25:48 GMT
REMOVE

On Mon, May 18, 2015 at 6:54 AM, xeonmailinglist-gmail <xeonmailinglist@gmail.com> wrote:

>  Hi,
>
> I am trying to submit a remote job to YARN MapReduce, but I can’t, because
> I get the error in [1]. There are no other exceptions in any of the other logs.
>
> My MapReduce runtime has 1 *ResourceManager* and 3 *NodeManagers*, and
> HDFS is running properly (all nodes are alive).
>
> I have looked at all the logs, and I still don’t understand why I get this
> error. Any help fixing this? Is it a problem with the remote job that I am
> submitting?
>
> [1]
>
> $ less logs/hadoop-ubuntu-namenode-ip-172-31-17-45.log
>
> 2015-05-18 10:42:16,570 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.addBlock: file /tmp/hadoop-yarn/staging/xeon/.staging/job_1431945660897_0001/job.split fileId=16394 for DFSClient_NONMAPREDUCE_-1923902075_14
> 2015-05-18 10:42:16,570 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getAdditionalBlock: /tmp/hadoop-yarn/staging/xeon/.staging/job_1431945660897_0001/job.split inodeId 16394 for DFSClient_NONMAPREDUCE_-1923902075_14
> 2015-05-18 10:42:16,571 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to choose remote rack (location = ~/default-rack), fallback to local rack
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:580)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:348)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:214)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:111)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:126)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1545)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
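The quoted trace ends in a NotEnoughReplicasException thrown by BlockPlacementPolicyDefault.chooseRemoteRack, and the same DEBUG line records the fallback to the local rack, which is what the default placement policy does when every DataNode sits on the single /default-rack. A quick sanity check, then, is whether the NameNode actually sees the three DataNodes as live and has room to place the staging file's replicas. The sketch below is not code from this thread; the NameNode address is an assumption to be replaced with the cluster's real fs.defaultFS.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class ListLiveDataNodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; replace with the cluster's real fs.defaultFS.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        try (FileSystem fs = FileSystem.get(conf)) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Ask the NameNode which DataNodes it currently knows about.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.println(dn.getHostName()
                        + " rack=" + dn.getNetworkLocation()
                        + " remaining=" + dn.getRemaining() + " bytes");
            }
            System.out.println("dfs.replication = " + conf.get("dfs.replication", "3"));
        }
    }
}

The same information is available from "hdfs dfsadmin -report" or the NameNode web UI.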
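As for whether the remote submission itself is at fault: a client submitting to a remote YARN cluster normally just needs its Configuration pointed at the remote NameNode and ResourceManager. The following is a minimal sketch of such a client, not the poster's code; the host names, ports, and input/output paths are assumptions to be replaced with the cluster's real values.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteSubmit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the remote cluster; the addresses below are assumptions.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.hostname", "resourcemanager.example.com");
        // Helps when the submitting host's platform or environment differs from the cluster's.
        conf.set("mapreduce.app-submission.cross-platform", "true");

        Job job = Job.getInstance(conf, "remote-test-job");
        job.setJarByClass(RemoteSubmit.class);
        job.setMapperClass(Mapper.class);     // identity mapper, just for the sketch
        job.setReducerClass(Reducer.class);   // identity reducer
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path("/user/xeon/input"));    // assumed paths
        FileOutputFormat.setOutputPath(job, new Path("/user/xeon/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

If submission still stalls after the staging files (job.split and the rest) are written, comparing the client's core-site.xml, yarn-site.xml, and mapred-site.xml with the cluster's, and checking the ResourceManager log for the application attempt, are the usual next steps.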


-- 
Thanks & Regards
Gopy
Rapisource LLC
Direct: 732-419-9663
Fax: 512-287-4047
Email: gopy@rapisource.com
www.rapisource.com
http://www.linkedin.com/in/gopykrishna

