hadoop-common-issues mailing list archives

From "Eric Sirianni (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10147) Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster
Date Fri, 06 Dec 2013 20:31:37 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841668#comment-13841668 ]

Eric Sirianni commented on HADOOP-10147:
----------------------------------------

For those interested in the details, here are the stack traces exhibiting the deadlock.  

Upon further reflection, this is actually caused by starting two DataNode instances _in parallel_,
which {{MiniDFSCluster}} itself does _not_ do.  However, we are using an alternative JUnit
test fixture that does initialize DataNode instances in parallel.  Regardless, it would be
beneficial to upgrade commons-logging to avoid this potential deadlock scenario.
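
To make the triggering condition concrete, the sketch below shows the shape of a parallel startup like our fixture's (the class and helper names are purely illustrative, not the actual fixture code).  Each submitted task ultimately drives {{DataNode.createDataNode()}} and therefore Jetty's {{HttpServer.start()}}, which is where the two threads collide inside {{JspServlet}}'s commons-logging initialization, as the thread dumps below show.

{noformat}
// Illustrative sketch only -- not the real fixture.  Two DataNode startups
// race through JspServlet.<init> -> LogFactory.getLog(), where the
// WeakHashtable monitors can interleave badly in commons-logging 1.1.1.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelDataNodeStartupSketch {

  // Stand-in for the per-node startup; in the real traces this bottoms out in
  // DataNode.createDataNode() -> HttpServer.start() -> JspServlet.<init>.
  static void startOneDataNode(final int index) {
    System.out.println("starting DataNode " + index);
  }

  public static void main(String[] args) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    for (int i = 0; i < 2; i++) {
      final int index = i;
      pool.submit(new Runnable() {
        @Override
        public void run() {
          startOneDataNode(index);
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
  }
}
{noformat}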

{noformat}
"thread-1":
	at org.apache.commons.logging.impl.WeakHashtable.purge(WeakHashtable.java:321)
	- waiting to lock <0x00000002213134f0> (a java.lang.ref.ReferenceQueue)
	at org.apache.commons.logging.impl.WeakHashtable.rehash(WeakHashtable.java:312)
	at java.util.Hashtable.put(Hashtable.java:429)
	- locked <0x00000002213356c8> (a org.apache.commons.logging.impl.WeakHashtable)
	at org.apache.commons.logging.impl.WeakHashtable.put(WeakHashtable.java:242)
	at org.apache.commons.logging.LogFactory.cacheFactory(LogFactory.java:1004)
	at org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:657)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
	at org.apache.jasper.servlet.JspServlet.<init>(JspServlet.java:59)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
	at java.lang.Class.newInstance0(Class.java:372)
	at java.lang.Class.newInstance(Class.java:325)
	at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153)
	- locked <0x0000000249f446d8> (a org.mortbay.jetty.servlet.ServletHolder)
	at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:428)
	at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	- locked <0x0000000249f44750> (a java.lang.Object)
	at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:736)
	at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
	at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
	at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
	at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	- locked <0x0000000249ee34b0> (a java.lang.Object)
	at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
	at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	- locked <0x0000000249ede4d8> (a java.lang.Object)
	at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
	at org.mortbay.jetty.Server.doStart(Server.java:224)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	- locked <0x0000000249ede0b8> (a java.lang.Object)
	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:824)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:392)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:738)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:311)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1814)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1714)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1752)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1743)
{noformat}

{noformat}
"thread-2":
	at java.util.Hashtable.remove(Hashtable.java:452)
	- waiting to lock <0x00000002213356c8> (a org.apache.commons.logging.impl.WeakHashtable)
	at org.apache.commons.logging.impl.WeakHashtable.purgeOne(WeakHashtable.java:338)
	- locked <0x00000002213134f0> (a java.lang.ref.ReferenceQueue)
	at org.apache.commons.logging.impl.WeakHashtable.put(WeakHashtable.java:238)
	at org.apache.commons.logging.LogFactory.cacheFactory(LogFactory.java:1004)
	at org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:657)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
	at org.apache.jasper.servlet.JspServlet.<init>(JspServlet.java:59)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
	at java.lang.Class.newInstance0(Class.java:372)
	at java.lang.Class.newInstance(Class.java:325)
	at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153)
	- locked <0x0000000249f3e988> (a org.mortbay.jetty.servlet.ServletHolder)
	at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:428)
	at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	- locked <0x0000000249f3ea00> (a java.lang.Object)
	at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:736)
	at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
	at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
	at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
	at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	- locked <0x0000000249ee0888> (a java.lang.Object)
	at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
	at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	- locked <0x0000000249ee03b8> (a java.lang.Object)
	at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
	at org.mortbay.jetty.Server.doStart(Server.java:224)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	- locked <0x0000000249edb990> (a java.lang.Object)
	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:824)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:392)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:738)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:311)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1814)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1714)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1752)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1743)
{noformat}
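
Reading the two dumps together, this is a textbook lock-order inversion inside commons-logging's {{WeakHashtable}}: thread-1 has locked the {{WeakHashtable}} (monitor {{0x00000002213356c8}}, via the synchronized {{Hashtable.put}}) and is waiting for the {{ReferenceQueue}} ({{0x00000002213134f0}}) in {{purge()}}, while thread-2 has locked that same {{ReferenceQueue}} in {{purgeOne()}} and is waiting for the {{WeakHashtable}} in {{Hashtable.remove}}.  Stripped of the logging specifics, the pattern is just two monitors taken in opposite orders (the objects below are stand-ins, not the commons-logging implementation):

{noformat}
// Minimal reproduction of the locking *pattern* seen in the dumps above.
// TABLE plays the WeakHashtable monitor, QUEUE plays the ReferenceQueue.
public class LockOrderInversionSketch {

  private static final Object TABLE = new Object();
  private static final Object QUEUE = new Object();

  public static void main(String[] args) {
    Thread t1 = new Thread(new Runnable() {
      public void run() {
        synchronized (TABLE) {       // like Hashtable.put() in "thread-1"
          pause();
          synchronized (QUEUE) {     // like WeakHashtable.purge() -- blocks here
          }
        }
      }
    }, "thread-1");

    Thread t2 = new Thread(new Runnable() {
      public void run() {
        synchronized (QUEUE) {       // like WeakHashtable.purgeOne() in "thread-2"
          pause();
          synchronized (TABLE) {     // like Hashtable.remove() -- blocks here
          }
        }
      }
    }, "thread-2");

    t1.start();
    t2.start();  // with this interleaving, both threads block forever
  }

  private static void pause() {
    try {
      Thread.sleep(100);   // widen the window so the inversion actually hits
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{noformat}

Commons-logging 1.1.2 and later rework this locking (per LOGGING-119), which is why bumping the dependency makes the deadlock go away.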

> Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-10147
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10147
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 2.2.0
>            Reporter: Eric Sirianni
>            Priority: Minor
>
> There is a deadlock in commons-logging 1.1.1 (see LOGGING-119) that can manifest itself while running {{MiniDFSCluster}} JUnit tests.
> This deadlock has been fixed in commons-logging 1.1.2.  The latest version available is commons-logging 1.1.3, and Hadoop should upgrade to that in order to address this deadlock.
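
For the fix itself, the change would presumably be a one-line version bump for the commons-logging dependency in the Hadoop build, along these lines (the exact POM that manages this version is an assumption on my part; the coordinates are the standard commons-logging artifact):

{noformat}
<!-- Assumed to live in the dependencyManagement section of the Hadoop
     parent POM; only the version changes (1.1.1 -> 1.1.3). -->
<dependency>
  <groupId>commons-logging</groupId>
  <artifactId>commons-logging</artifactId>
  <version>1.1.3</version>
</dependency>
{noformat}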



