accumulo-notifications mailing list archives

From "Christopher Tubbs (JIRA)" <>
Subject [jira] [Created] (ACCUMULO-2764) Stopping MAC before its processes have fully started causes an indefinite hang
Date Wed, 30 Apr 2014 18:48:21 GMT
Christopher Tubbs created ACCUMULO-2764:

             Summary: Stopping MAC before its processes have fully started causes an indefinite hang
                 Key: ACCUMULO-2764
             Project: Accumulo
          Issue Type: Bug
          Components: mini
    Affects Versions: 1.6.0
         Environment: OpenJDK 1.6.0, CentOS 6.5, 2CPU, 6GB RAM (virtual hardware)
            Reporter: Christopher Tubbs
             Fix For: 1.6.1, 1.7.0

I saw this testing 1.6.0-RC5.

Calling process.destroy() and then process.waitFor(), as MiniAccumuloCluster does in its
stop method, before the process has fully started appears to cause an indefinite hang.
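The pattern described above can be sketched as follows. This is an illustrative, simplified example, not the actual MiniAccumuloClusterImpl source; in the normal case it completes, but per this report waitFor() can block forever if destroy() fires before the child has fully started.

```java
public class StopSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for a MAC server process; "sleep" is just an illustration.
        Process p = new ProcessBuilder("sleep", "5").start();
        p.destroy();   // may not take effect if the process is still starting
        p.waitFor();   // blocks until exit; the reported hang happens here
        System.out.println("done");
    }
}
```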

I saw this most recently in MiniAccumuloClusterGCTest.testAccurateProcessListReturned, which
gets a ProcessReference and then immediately shuts down MAC, though it was also the root cause
of ACCUMULO-2756. In this instance, the test got stuck in the MAC teardown.

"main" prio=10 tid=0x00007f3cf4008800 nid=0x2b19 in Object.wait() [0x00007f3cf8f9c000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x00000000e29dd2e8> (a java.lang.UNIXProcess)
        at java.lang.Object.wait(
        at java.lang.UNIXProcess.waitFor(
        - locked <0x00000000e29dd2e8> (a java.lang.UNIXProcess)
        at org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl.stop(
        at org.apache.accumulo.minicluster.impl.MiniAccumuloClusterGCTest.tearDownMiniCluster(
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(
        at java.lang.reflect.Method.invoke(
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(
        at org.junit.internal.runners.statements.RunAfters.evaluate(
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(
        at org.apache.maven.surefire.booter.ForkedBooter.main(

It appears that destroy() does not actually succeed in destroying a process that is still
starting, so the subsequent waitFor() waits indefinitely. I haven't debugged further. It may
be a JVM bug, a limitation in the Java Process API, or some UNIX signal-handling quirk of
process instantiation that destroy() cannot account for.

One fix could be to make start() wait until the metadata table can be scanned before it returns,
to ensure all processes are actually running and ready. Another fix would be to have the teardown
code retry destroy() if waitFor() does not return within a reasonable amount of time.
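The second fix could be sketched roughly as below. This is a hypothetical helper, not the actual patch; the name destroyAndWait and the timeout value are illustrative. Since waitFor() here can block indefinitely, the sketch polls exitValue() instead and retries destroy() after the deadline passes.

```java
public class StopWithRetry {
    /**
     * Destroys the process and polls for its exit; if it is still alive
     * after timeoutMillis, destroy() is retried and the deadline reset.
     * Returns the process exit code.
     */
    static int destroyAndWait(Process p, long timeoutMillis) throws InterruptedException {
        p.destroy();
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            try {
                return p.exitValue(); // returns once the process has exited
            } catch (IllegalThreadStateException stillRunning) {
                if (System.currentTimeMillis() > deadline) {
                    p.destroy(); // retry destroy rather than blocking forever
                    deadline = System.currentTimeMillis() + timeoutMillis;
                }
                Thread.sleep(50); // poll instead of blocking in waitFor()
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in child process for illustration.
        Process sleeper = new ProcessBuilder("sleep", "60").start();
        int exit = destroyAndWait(sleeper, 2000);
        System.out.println("exited=" + (exit != 0)); // killed, so nonzero exit
    }
}
```

Polling exitValue() avoids committing the teardown thread to an unbounded waitFor(), which is the call the stack trace above shows stuck.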

This message was sent by Atlassian JIRA
