Message-ID: <30182406.257521292987041846.JavaMail.jira@thor>
Date: Tue, 21 Dec 2010 22:04:01 -0500 (EST)
From: "Rajesh Balamohan (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] Updated: (HADOOP-5380) Calls from running tasks to TaskTracker methods sometimes fail and incur a 60s timeout

[ https://issues.apache.org/jira/browse/HADOOP-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajesh Balamohan updated HADOOP-5380:
-------------------------------------

Attachment:
HADOOP-5380.Y.20.branch.patch

Investigation:
===========
1. The Task.TaskReporter thread periodically sends status updates/pings to the TaskTracker; the default "PROGRESS_INTERVAL" is 3000 ms. If there is task progress to report, it sends a STATUS_UPDATE message to the TaskTracker; otherwise it sends a PING to check whether the TaskTracker is alive.
2. When the map phase is over, the task calls TaskReporter.stopCommunicationThread(), which interrupts this thread.
3. If the thread was communicating with the server at the moment of the interrupt, the connection to the server is broken: because the interrupt was issued, the stream throws ClosedByInterruptException and no data is sent.
4. However, in Client.java the Client keeps waiting for the response in Client -> Connection -> receiveResponse() -> readInt(). Only after "ipc.ping.interval" elapses does it time out and rethrow the exception. Since the default "ipc.ping.interval" is 60000 ms, the call waits a full minute before failing. This causes heavy variation in the runtimes of small jobs that normally finish in a couple of minutes.

Patch:
=====
I applied a patch (a one-line change) that interrupts Client.java's Connection upon any IOException in sendParam(). I verified it against a hadoop-0.20.2xx version by running the PigMix benchmark: it works fine, and no timeouts occur with this patch.

> Calls from running tasks to TaskTracker methods sometimes fail and incur a 60s timeout
> --------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5380
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5380
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: Matei Zaharia
>         Attachments: HADOOP-5380.Y.20.branch.patch
>
>
> I'm seeing some map tasks in my jobs take 1 minute to commit after they finish the map computation.
> On the map side, the output looks like this:
>
> 2009-03-02 21:30:54,384 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=MAP, sessionId= - already initialized
> 2009-03-02 21:30:54,437 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 800
> 2009-03-02 21:30:54,437 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 300
> 2009-03-02 21:30:55,493 INFO org.apache.hadoop.mapred.MapTask: data buffer = 239075328/298844160
> 2009-03-02 21:30:55,494 INFO org.apache.hadoop.mapred.MapTask: record buffer = 786432/983040
> 2009-03-02 21:31:00,381 INFO org.apache.hadoop.mapred.MapTask: Starting flush of map output
> 2009-03-02 21:31:07,892 INFO org.apache.hadoop.mapred.MapTask: Finished spill 0
> 2009-03-02 21:31:07,951 INFO org.apache.hadoop.mapred.TaskRunner: Task:attempt_200903022127_0001_m_003163_0 is done. And is in the process of commiting
> 2009-03-02 21:32:07,949 INFO org.apache.hadoop.mapred.TaskRunner: Communication exception: java.io.IOException: Call to /127.0.0.1:50311 failed on local exception: java.nio.channels.ClosedChannelException
> 	at org.apache.hadoop.ipc.Client.wrapException(Client.java:765)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:733)
> 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> 	at org.apache.hadoop.mapred.$Proxy0.ping(Unknown Source)
> 	at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:525)
> 	at java.lang.Thread.run(Thread.java:619)
> Caused by: java.nio.channels.ClosedChannelException
> 	at java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:167)
> 	at java.nio.channels.SelectableChannel.register(SelectableChannel.java:254)
> 	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:331)
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:116)
> 	at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
> 	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 	at java.io.DataInputStream.readInt(DataInputStream.java:370)
> 	at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
> 	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
> 2009-03-02 21:32:07,953 INFO org.apache.hadoop.mapred.TaskRunner: Task 'attempt_200903022127_0001_m_003163_0' done.
>
> In the TaskTracker log, it looks like this:
>
> 2009-03-02 21:31:08,110 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call ping(attempt_200903022127_0001_m_003163_0) from 127.0.0.1:56884: output error
> 2009-03-02 21:31:08,111 INFO org.apache.hadoop.ipc.Server: IPC Server handler 10 on 50311 caught: java.nio.channels.ClosedChannelException
> 	at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:126)
> 	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
> 	at org.apache.hadoop.ipc.Server.channelWrite(Server.java:1195)
> 	at org.apache.hadoop.ipc.Server.access$1900(Server.java:77)
> 	at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:613)
> 	at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:677)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:981)
>
> Note that the task actually seemed to commit - it didn't get speculatively executed or anything. However, the job wasn't able to continue until this one task was done. Both parties seem to think the channel was closed. How does the channel get closed externally? If closing it from outside is unavoidable, maybe the right thing to do is to set a much lower timeout, because a 1 minute delay can be pretty significant for a small job.
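The root cause described in steps 2-3 of the investigation above is standard java.nio behavior: interrupting a thread that is blocked on an interruptible channel closes the channel before ClosedByInterruptException is delivered, so the peer is left talking to a dead connection. A minimal, self-contained sketch of that behavior (using a java.nio.channels.Pipe as a stand-in for the task-to-TaskTracker socket, not the actual Hadoop code):

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.Pipe;

public class InterruptClosesChannelDemo {

    /**
     * Blocks a thread on a channel read, interrupts it, and reports whether
     * the JVM both threw ClosedByInterruptException and closed the channel.
     */
    static boolean interruptClosesChannel() throws Exception {
        Pipe pipe = Pipe.open();
        Pipe.SourceChannel source = pipe.source();
        final boolean[] sawException = {false};

        Thread reader = new Thread(() -> {
            try {
                // Blocks: nothing is ever written into the pipe.
                source.read(ByteBuffer.allocate(4));
            } catch (ClosedByInterruptException e) {
                sawException[0] = true; // delivered after the channel is closed
            } catch (Exception ignored) {
            }
        });
        reader.start();
        Thread.sleep(200);   // let the reader block in read()
        reader.interrupt();  // mirrors TaskReporter.stopCommunicationThread()
        reader.join();

        // The channel is now closed as a side effect of the interrupt.
        return sawException[0] && !source.isOpen();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(interruptClosesChannel());
    }
}
```

In the Hadoop case the interrupted I/O in sendParam() closes the socket the same way, while the thread in receiveResponse() keeps waiting on readInt() until "ipc.ping.interval" expires.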
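The one-line fix described in the patch section follows a general pattern: when the request send fails, wake the thread waiting for the response immediately instead of letting it sit out the full "ipc.ping.interval". The sketch below is not the actual HADOOP-5380 patch; the class and method names are hypothetical, and a 60-second sleep stands in for the blocked readInt():

```java
// Hypothetical sketch of the fix pattern, not the actual HADOOP-5380 patch:
// on a send failure, interrupt the response-waiting thread right away
// rather than letting it wait out the full timeout.
public class FailFastOnSendError {

    /** Returns the milliseconds spent "waiting for a response". */
    static long waitForResponse(boolean interruptOnSendFailure) throws InterruptedException {
        Thread receiver = new Thread(() -> {
            try {
                Thread.sleep(60_000); // stand-in for readInt() under ipc.ping.interval
            } catch (InterruptedException e) {
                // Woken early: surface the send failure instead of timing out.
            }
        });
        long start = System.nanoTime();
        receiver.start();

        boolean sendFailed = true; // simulate sendParam() throwing IOException
        if (sendFailed && interruptOnSendFailure) {
            receiver.interrupt();  // the essence of the one-line change
        }
        receiver.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ms waited with fix: " + waitForResponse(true));
    }
}
```

Without the interrupt, waitForResponse(false) would sit for the full minute, which is exactly the 60 s stall visible between the "is done" and "Communication exception" lines in the task log above.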
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.