Date: Fri, 27 Mar 2015 05:52:53 +0000 (UTC)
From: "zhouyingchao (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-7212) Huge number of BLOCKED threads rendering DataNodes useless

    [ https://issues.apache.org/jira/browse/HDFS-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383378#comment-14383378 ]

zhouyingchao commented on HDFS-7212:
------------------------------------

I'm using 2.6 and have also noticed that the DN's heartbeats are sometimes delayed for a very long time, say more than 100 seconds. I took jstacks twice, and in both dumps the threads are all blocked on the dataset lock, which is held by a thread calling createTemporary that is itself waiting for the earlier incarnation of the writer to exit:

"DataXceiver for client at XXXXX" daemon prio=10 tid=0x00007f14041e6480 nid=0x52bc in Object.wait() [0x00007f11d78f7000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1194)
        - locked <0x00000007a33b85d8> (a org.apache.hadoop.util.Daemon)
        at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:183)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:1231)
        - locked <0x00000007b01428c0> (a org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:114)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:179)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
        at java.lang.Thread.run(Thread.java:662)
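To make the pattern above concrete, here is a toy, self-contained sketch (not the real DataNode code; the class and method names are invented for illustration) of why a single slow createTemporary call can stall every other thread that needs the dataset lock, including the paths the heartbeat depends on:

{code}
// Toy sketch of the locking pattern in the stack above (names are invented):
// one thread joins a slow "previous writer" while holding the shared dataset
// monitor, so every other thread that needs the same monitor piles up behind it.
public class DatasetLockSketch {
    private final Object datasetLock = new Object();   // stands in for the FsDatasetImpl monitor

    // Analogous to createTemporary(): stops the previous writer of the replica
    // while still holding the dataset lock, so the lock stays held for the whole join.
    public void createTemporary(Thread previousWriter) throws InterruptedException {
        synchronized (datasetLock) {
            previousWriter.interrupt();
            previousWriter.join();          // may take a long time; lock is not released
            // ... create the new temporary replica under the lock ...
        }
    }

    // Analogous to heartbeat reporting or other DataXceivers that briefly need the lock.
    public void quickOperation() {
        synchronized (datasetLock) {
            // blocked for as long as createTemporary() is still joining the old writer
        }
    }
}
{code}

The usual remedy for this shape is to shrink the critical section, e.g. interrupt and join the old writer outside the monitor and re-check the replica state after re-acquiring it; I'm not claiming that is the right fix here, just describing the pattern.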
"DataXceiver for client at XXXXX" daemon prio=10 tid=0x00007f14041e6480 nid=0x52bc in Object.wait() [0x00007f11d78f7000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1194) - locked <0x00000007a33b85d8> (a org.apache.hadoop.util.Daemon) at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:183) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:1231) - locked <0x00000007b01428c0> (a org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:114) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:179) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) at java.lang.Thread.run(Thread.java:662) > Huge number of BLOCKED threads rendering DataNodes useless > ---------------------------------------------------------- > > Key: HDFS-7212 > URL: https://issues.apache.org/jira/browse/HDFS-7212 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Affects Versions: 2.4.0 > Environment: PROD > Reporter: Istvan Szukacs > > There are 3000 - 8000 threads in each datanode JVM, blocking the entire VM and rendering the service unusable, missing heartbeats and stopping data access. The threads look like this: > {code} > 3415 (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise) > - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=186 (Compiled frame) > - java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() @bci=1, line=834 (Interpreted frame) > - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node, int) @bci=67, line=867 (Interpreted frame) > - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) @bci=17, line=1197 (Interpreted frame) > - java.util.concurrent.locks.ReentrantLock$NonfairSync.lock() @bci=21, line=214 (Compiled frame) > - java.util.concurrent.locks.ReentrantLock.lock() @bci=4, line=290 (Compiled frame) > - org.apache.hadoop.net.unix.DomainSocketWatcher.add(org.apache.hadoop.net.unix.DomainSocket, org.apache.hadoop.net.unix.DomainSocketWatcher$Handler) @bci=4, line=286 (Interpreted frame) > - org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(java.lang.String, org.apache.hadoop.net.unix.DomainSocket) @bci=169, line=283 (Interpreted frame) > - org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(java.lang.String) @bci=212, line=413 (Interpreted frame) > - org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(java.io.DataInputStream) @bci=13, line=172 (Interpreted frame) > - org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(org.apache.hadoop.hdfs.protocol.datatransfer.Op) @bci=149, line=92 (Compiled frame) > - org.apache.hadoop.hdfs.server.datanode.DataXceiver.run() @bci=510, line=232 (Compiled frame) > - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame) > {code} > Has anybody seen this 
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)