Date: Fri, 29 May 2015 15:19:26 +0000 (UTC)
From: "Hudson (JIRA)"
To: hdfs-issues@hadoop.apache.org
Reply-To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-8429) Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

[ https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564957#comment-14564957 ]

Hudson commented on HDFS-8429:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HDFS-8429. Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread.
(zhouyingchao via cmccabe) (cmccabe: rev 246cefa089156a50bf086b8b1e4d4324d66dc58c)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocketWatcher.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/unix/TestDomainSocketWatcher.java
* hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocketWatcher.c

> Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-8429
>                 URL: https://issues.apache.org/jira/browse/HDFS-8429
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: zhouyingchao
>            Assignee: zhouyingchao
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, HDFS-8429-003.patch
>
>
> In our cluster, an application hung while doing a short-circuit read of a local HDFS block. Looking into the log, we found that the DataNode's DomainSocketWatcher.watcherThread had exited with the following log:
> {code}
> ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: Thread[Thread-25,5,main] terminating on unexpected exception
> java.lang.NullPointerException
>         at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
>         at java.lang.Thread.run(Thread.java:662)
> {code}
> Line 463 is in the following code snippet:
> {code}
> try {
>   for (int fd : fdSet.getAndClearReadableFds()) {
>     sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet,
>         fd);
>   }
> {code}
> getAndClearReadableFds is a native method that mallocs an int array. Since our memory was very tight, it appears that the malloc failed and a NULL pointer was returned.
> The bad part is that other threads then blocked with stacks like this:
> {code}
> "DataXceiver for client unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for operation #1]" daemon prio=10 tid=0x00007f0c9c086d90 nid=0x8fc3 waiting on condition [0x00007f09b9856000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for <0x00000007b0174808> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
>         at org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
>         at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>         at java.lang.Thread.run(Thread.java:662)
> {code}
> IMO, we should exit the DN so that users know something went wrong and can fix it.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
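The NullPointerException in the issue is a consequence of how Java's enhanced-for loop works over arrays: the loop first evaluates the array expression, so if the method feeding the loop returns null (as the native getAndClearReadableFds apparently did when its malloc failed), the NPE is raised before the loop body ever runs. A minimal sketch of this failure mode, assuming a stand-in method (names here are illustrative, not the actual Hadoop code):

```java
public class NullArrayLoop {
    // Stand-in for a native method like getAndClearReadableFds() that
    // returns null on allocation failure instead of throwing.
    static int[] getFdsOrNull(boolean allocFails) {
        return allocFails ? null : new int[] {3, 4, 5};
    }

    public static void main(String[] args) {
        boolean threw = false;
        try {
            // NPE is thrown here, when the loop dereferences the null array,
            // before any iteration happens.
            for (int fd : getFdsOrNull(true)) {
                System.out.println(fd);
            }
        } catch (NullPointerException e) {
            threw = true;
        }
        System.out.println("NPE on null array: " + threw);
    }
}
```

A defensive caller would either check the returned array for null before iterating, or (as the native side can do) throw an OutOfMemoryError from JNI when allocation fails, so the failure is explicit rather than a bare NPE.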
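The fail-fast direction the reporter argues for (make the failure visible instead of letting callers park forever on the watcher's condition variable) can be sketched with java.util.concurrent primitives. This is a hedged illustration of the pattern only, not the actual DomainSocketWatcher implementation or the committed patch; all names are hypothetical:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class FailFastWatcher {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition processed = lock.newCondition();
    private boolean closed = false;

    // Called from the watcher thread's catch(Throwable) handler: mark the
    // watcher dead and wake every thread parked in add(), so they fail
    // with an exception instead of waiting forever.
    public void shutdownOnError() {
        lock.lock();
        try {
            closed = true;
            processed.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public void add() throws InterruptedException {
        lock.lock();
        try {
            // In the real code there is also an "entry processed" condition;
            // the key point is that the wait loop must also exit when the
            // watcher has died.
            while (!closed /* && entry not yet processed */) {
                processed.await();
            }
            if (closed) {
                throw new IllegalStateException("watcher is closed");
            }
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        FailFastWatcher w = new FailFastWatcher();
        Thread waiter = new Thread(() -> {
            try {
                w.add();
            } catch (IllegalStateException e) {
                System.out.println("waiter failed fast: " + e.getMessage());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        Thread.sleep(100);     // let the waiter park on the condition
        w.shutdownOnError();   // simulate the watcher thread dying
        waiter.join();
    }
}
```

The committed fix for HDFS-8429 touched both the Java and native sides of DomainSocketWatcher (see the file list above); this sketch only shows the general "wake waiters on fatal error" shape of such a change.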