From: "Steven Rand (JIRA)"
To: yarn-issues@hadoop.apache.org
Date: Thu, 2 Feb 2017 16:17:52 +0000 (UTC)
Subject: [jira] [Commented] (YARN-6013) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

    [ https://issues.apache.org/jira/browse/YARN-6013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850106#comment-15850106 ]

Steven Rand commented on YARN-6013:
-----------------------------------

[~djp], I'm wondering whether you have any opinions here, since you've been working on the 2.8.0 release. I could be wrong, of course, but I'm concerned that this is a non-trivial regression from 2.7.3, and I think it'd be great if we could fix this (or determine that I'm just doing something wrong) before 2.8.0 is released.

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled
> --------------------------------------------------------------------------------------------------
>
>                 Key: YARN-6013
>                 URL: https://issues.apache.org/jira/browse/YARN-6013
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: client, yarn
>    Affects Versions: 2.8.0
>            Reporter: Steven Rand
>            Priority: Critical
>        Attachments: YARN-6013-branch-2.8.0.001.patch, yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) fails with an EOFException. I've reproduced this with Spark 2.0.2 built against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
>
> Steps to reproduce using distcp:
>
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows:
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, org.apache.commons.lang.RandomStringUtils.random(1024, "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData hdfs:///tmp/testDataCopy
> {code}
>
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1482189777425_0004, File: hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1482189777425_0004: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after sleeping for 30000ms.
> java.io.EOFException: End of File Exception between local host is: "/"; destination host is: "":8030; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>         at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>         at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>         at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
>         at com.sun.proxy.$Proxy81.allocate(Unknown Source)
>         at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:204)
>         at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:735)
>         at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:269)
>         at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:281)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.EOFException
>         at java.io.DataInputStream.readInt(DataInputStream.java:392)
>         at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1785)
>         at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1156)
>         at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1053)
> {code}
>
> Marking as "critical" since this blocks YARN users from encrypting RPC in their Hadoop clusters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
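
For reference, step 1 of the quoted repro ("Set hadoop.rpc.protection equal to privacy") corresponds to setting the hadoop.rpc.protection property in core-site.xml on the cluster. A minimal illustrative fragment is shown below; the property name and its valid values are standard Hadoop configuration, while the surrounding file layout is only a sketch and is not taken from the issue:

{code}
<!-- Illustrative core-site.xml fragment (not from the issue itself). -->
<configuration>
  <property>
    <name>hadoop.rpc.protection</name>
    <!-- Valid values: authentication, integrity, privacy.
         The reported failure occurs with privacy, i.e. full RPC encryption. -->
    <value>privacy</value>
  </property>
</configuration>
{code}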