Date: Tue, 24 Sep 2013 23:56:02 +0000 (UTC)
From: "Hadoop QA (JIRA)"
To: hdfs-issues@hadoop.apache.org
Reply-To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-5255) Distcp job fails with hsftp when https is enabled
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

    [ https://issues.apache.org/jira/browse/HDFS-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776953#comment-13776953 ]

Hadoop QA commented on HDFS-5255:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12604879/HDFS-5255.01.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

                  org.apache.hadoop.hdfs.TestHftpDelegationToken

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5028//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5028//console

This message is automatically generated.

> Distcp job fails with hsftp when https is enabled
> -------------------------------------------------
>
>                 Key: HDFS-5255
>                 URL: https://issues.apache.org/jira/browse/HDFS-5255
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.1.0-beta
>            Reporter: Yesha Vora
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-5255.01.patch
>
>
> Run Distcp job using hsftp when ssl is enabled.
> The job fails with a "java.net.SocketException: Unexpected end of file from server" error.
> Running: hadoop distcp hsftp://localhost:50070/f1 hdfs://localhost:19000/f5
> All the tasks fail with the error below.
>
> 13/09/23 15:52:38 INFO mapreduce.Job: Task Id : attempt_1379976241507_0004_m_000000_0, Status : FAILED
> Error: java.io.IOException: File copy failed: hsftp://localhost:50070/f1 --> hdfs://localhost:19000/f5
>         at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:171)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1499)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:166)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying hsftp://127.0.0.1:50070/f1 to hdfs://localhost:19000/f5
>         at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>         at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>         ... 10 more
> Caused by: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.IOException: HTTP_OK expected, received 500
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.readBytes(RetriableFileCopyCommand.java:233)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:198)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:134)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:101)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:83)
>         at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>         ... 11 more
> Caused by: java.io.IOException: HTTP_OK expected, received 500
>         at org.apache.hadoop.hdfs.HftpFileSystem$RangeHeaderUrlOpener.connect(HftpFileSystem.java:383)
>         at org.apache.hadoop.hdfs.ByteRangeInputStream.openInputStream(ByteRangeInputStream.java:119)
>         at org.apache.hadoop.hdfs.ByteRangeInputStream.getInputStream(ByteRangeInputStream.java:103)
>         at org.apache.hadoop.hdfs.ByteRangeInputStream.read(ByteRangeInputStream.java:187)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>         at java.io.FilterInputStream.read(FilterInputStream.java:107)
>         at org.apache.hadoop.tools.util.ThrottledInputStream.read(ThrottledInputStream.java:75)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.readBytes(RetriableFileCopyCommand.java:230)
>         ... 16 more
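For reference, and not part of HDFS-5255.01.patch or the QA run above: a minimal Java sketch of exercising the same hsftp read path that DistCp's CopyMapper goes through, without running a full DistCp job. It assumes a NameNode at localhost:50070 with SSL enabled and a readable file /f1, matching the quoted reproduction steps; the class name HsftpReadCheck and the file path are illustrative only.

{code:java}
// Minimal sketch (not from the attached patch): read a file over hsftp://
// using the standard FileSystem API. The open()/read() below goes through
// HftpFileSystem's ByteRangeInputStream, the same stack as in the trace above.
import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HsftpReadCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // hsftp:// selects the SSL variant of HftpFileSystem; host, port and the
    // path /f1 are taken from the reproduction command quoted above.
    FileSystem fs = FileSystem.get(URI.create("hsftp://localhost:50070"), conf);
    try (InputStream in = fs.open(new Path("/f1"))) {
      IOUtils.copyBytes(in, System.out, conf, false);
    }
  }
}
{code}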
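The innermost "Caused by" points at HftpFileSystem$RangeHeaderUrlOpener.connect. Below is a simplified sketch, not the actual Hadoop code, of the kind of response-code check that produces "HTTP_OK expected, received 500": a ranged GET is issued and any status other than 200/206 is turned into an IOException, so a server-side 500 surfaces before any file bytes are read.

{code:java}
// Simplified sketch of a ranged-GET opener with a response-code check.
// Over hsftp the connection is HTTPS, but HttpURLConnection is its superclass,
// which keeps this illustration self-contained.
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeOpenSketch {
  static InputStream openRange(URL url, long offset) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    if (offset > 0) {
      conn.setRequestProperty("Range", "bytes=" + offset + "-");
    }
    conn.connect();
    int code = conn.getResponseCode();
    int expected = (offset > 0) ? HttpURLConnection.HTTP_PARTIAL
                                : HttpURLConnection.HTTP_OK;
    if (code != expected) {
      // Mirrors the message seen in the stack trace above
      // ("HTTP_OK expected, received 500").
      throw new IOException((offset > 0 ? "HTTP_PARTIAL" : "HTTP_OK")
          + " expected, received " + code);
    }
    return conn.getInputStream();
  }
}
{code}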