Subject: Re: issue about distcp "Source and target differ in block-size. Use -pb to preserve block-sizes during copy."
From: Stanley Shi <sshi@gopivotal.com>
To: user@hadoop.apache.org
Date: Fri, 25 Jul 2014 16:08:59 +0800

Your client-side log shows the failure at "14/07/24 18:35:58 INFO mapreduce.Job: T***", but the NameNode log you pasted ends at "2014-07-24 17:39:34,255", so the two don't cover the same time window. By the way, which version of HDFS are you using?
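Also, since the error in the subject line says the source and target block sizes differ, the usual remedy is to re-run distcp with -pb so block sizes are preserved during the copy. A minimal sketch, reusing the source and target URIs from your stack trace (the directory-level paths are illustrative; adjust them to your clusters):

    # check the Hadoop/HDFS version in use (run on both clusters)
    hadoop version

    # retry the copy, preserving block sizes with -pb
    hadoop distcp -pb webhdfs://CH22:50070/mytest/pipe_url_bak webhdfs://develop/tmp/pipe_url_bak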
Regards,
*Stanley Shi,*

On Fri, Jul 25, 2014 at 10:36 AM, ch huang <justlooks@gmail.com> wrote:

> 2014-07-24 17:33:04,783 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
> 2014-07-24 17:33:05,742 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
> 2014-07-24 17:33:33,179 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:33:33,442 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@67698344 expecting start txid #62525
> 2014-07-24 17:33:33,442 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c, http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:33:33,442 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c, http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c' to transaction ID 62525
> 2014-07-24 17:33:33,442 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c' to transaction ID 62525
> 2014-07-24 17:33:33,480 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073753268_12641 192.168.10.51:50010 192.168.10.49:50010 192.168.10.50:50010
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.50:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]} size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.51:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]} size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.49:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]} size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.51:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]} size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.49:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]} size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.50:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]} size 0
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073753338_12711 192.168.10.50:50010 192.168.10.49:50010 192.168.10.51:50010
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.49:50010 is added to blk_1073753339_12712{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]} size 0
> .................................
>
> 2014-07-24 17:35:33,573 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:35:33,826 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3a7ff649 expecting start txid #62721
> 2014-07-24 17:35:33,826 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:35:33,826 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c' to transaction ID 62721
> 2014-07-24 17:35:33,826 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c' to transaction ID 62721
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.49:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]} size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.51:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]} size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.10.50:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]} size 0
> 2014-07-24 17:35:33,869 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073753270_12643 192.168.10.49:50010 192.168.10.51:50010 192.168.10.50:50010
> 2014-07-24 17:35:33,871 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c of size 1385 edits # 16 loaded in 0 seconds
> 2014-07-24 17:35:33,872 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 16 edits starting from txid 62720
> 2014-07-24 17:35:34,042 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:37,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:40,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:37:33,915 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:37:34,194 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5ed5ecda expecting start txid #62737
> 2014-07-24 17:37:34,195 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c, http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:37:34,195 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c, http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c' to transaction ID 62737
> 2014-07-24 17:37:34,195 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c' to transaction ID 62737
> 2014-07-24 17:37:34,223 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073753271_12644 192.168.10.51:50010 192.168.10.49:50010 192.168.10.50:50010
> 2014-07-24 17:37:34,224 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0 :
> 2014-07-24 17:37:34,225 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 3 edits starting from txid 62736
> 2014-07-24 17:37:37,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:40,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:43,051 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:39:34,255 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log roll on remote NameNode hz24/192.168.10.24:8020
>
> On Fri, Jul 25, 2014 at 10:25 AM, Stanley Shi <sshi@gopivotal.com> wrote:
>
>> Would you please also paste the corresponding NameNode log?
>>
>> Regards,
>> *Stanley Shi,*
>>
>> On Fri, Jul 25, 2014 at 9:15 AM, ch huang <justlooks@gmail.com> wrote:
>>
>>> hi, maillist:
>>> I am trying to copy data from my old cluster to the new cluster and I get the error below; how can I fix it?
>>>
>>> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id : attempt_1406182801379_0004_m_000000_1, Status : FAILED
>>> Error: java.io.IOException: File copy failed: webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 --> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>>         at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>>         at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>>         at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>>>         ... 10 more
>>> Caused by: java.io.IOException: Error writing request body to server
>>>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
>>>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
>>>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>>>         at java.io.DataOutputStream.write(DataOutputStream.java:107)
>>>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
>>>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
>>>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
>>>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
>>>         at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>>>         ... 11 more
>>> 14/07/24 18:35:59 INFO mapreduce.Job:  map 16% reduce 0%
>>> 14/07/24 18:39:39 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id : attempt_1406182801379_0004_m_000000_2, Status : FAILED
>>> Error: java.io.IOException: File copy failed: webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 --> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>>         at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>>         at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
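A side note on the log you pasted: the EditLogTailer and StandbyException entries suggest it comes from the standby NameNode (the standby is the one that tails edits and triggers log rolls on the remote active NameNode). The "Operation category READ is not supported in state standby" warnings are usually harmless in an HA cluster; they just mean a client contacted the standby first. To confirm which NameNode is currently active, something like the following should work (nn1/nn2 are placeholder service IDs; substitute the ones from your configuration):

    # query the HA state of each configured NameNode
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2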