geode-issues mailing list archives

From "ASF subversion and git services (JIRA)" <>
Subject [jira] [Commented] (GEODE-1248) gfsh shutdown command does not shutdown members waiting for missing disk stores
Date Wed, 20 Apr 2016 21:51:25 GMT


ASF subversion and git services commented on GEODE-1248:

Commit ea97a536e9175f36c4bc8d69a89d079649d44f82 in incubator-geode's branch refs/heads/develop
from [~jens.deppe]
[;h=ea97a53 ]

GEODE-1236 GEODE-1248: Fix gfsh shutdown call

- This fixes two issues when using the gfsh 'shutdown' command.
- One is that the JVM can exit prematurely because all remaining threads
  are daemon threads. When coupled with network partition detection, this
  can result in member-departed events causing split-brain scenarios.
- The other issue is that when a member is starting up it may have
  synchronized on the CacheFactory class while waiting on disk store recovery.
  This prevented gfsh shutdown from running, as it would also try to
  synchronize on the CacheFactory and be blocked.
  (Both issues are sketched below.)
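
Both issues come down to standard JVM behavior, so a couple of minimal, self-contained Java sketches may help make them concrete. These are illustrative toy classes only (hypothetical names like DaemonExitSketch and ToyCacheFactory), not Geode code.

The first sketch shows the premature-exit problem: once the last non-daemon thread returns, the JVM exits even though daemon threads still have work in flight.

{noformat}
// Toy example only: a daemon thread doing the "orderly disconnect" is cut short
// because the JVM exits as soon as the last non-daemon thread (main) returns.
public class DaemonExitSketch {
    public static void main(String[] args) {
        Thread cleanup = new Thread(() -> {
            try {
                Thread.sleep(5_000);                        // stands in for the orderly disconnect
                System.out.println("clean shutdown done");  // never printed while the thread is daemon
            } catch (InterruptedException ignored) {
            }
        }, "shutdown-cleanup");
        cleanup.setDaemon(true);  // flip to false and the JVM stays up until the disconnect finishes
        cleanup.start();
        // main returns here; with only daemon threads left, the process dies mid-shutdown
    }
}
{noformat}

The second sketch shows the class-monitor contention: a startup thread holds the factory's class lock while it "waits for recovery", so a second thread calling another static synchronized method on the same class blocks. That is the BLOCKED / TIMED_WAITING pair visible in the stack dump quoted below.

{noformat}
// Toy example only, not Geode internals: ToyCacheFactory stands in for CacheFactory.
public class ClassLockContentionSketch {

    static class ToyCacheFactory {
        // Stand-in for CacheFactory.create(): holds ToyCacheFactory.class while "recovering".
        static synchronized void create() throws InterruptedException {
            System.out.println("create(): holding the class lock, waiting for disk store recovery...");
            Thread.sleep(10_000); // stands in for the potentially indefinite wait
        }

        // Stand-in for CacheFactory.getAnyInstance(): also synchronized on the class,
        // so it cannot run until create() releases the monitor.
        static synchronized void getAnyInstance() {
            System.out.println("getAnyInstance(): acquired the class lock");
        }
    }

    public static void main(String[] args) throws Exception {
        Thread startup = new Thread(() -> {
            try {
                ToyCacheFactory.create();
            } catch (InterruptedException ignored) {
            }
        }, "cache-startup");
        startup.start();

        Thread.sleep(500); // let the startup thread grab the class monitor first

        Thread shutdownFn = new Thread(ToyCacheFactory::getAnyInstance, "Function Execution Processor");
        shutdownFn.start();             // blocks on ToyCacheFactory.class, like the stack dump below
        shutdownFn.join(2_000);
        System.out.println(shutdownFn.getName() + " state: " + shutdownFn.getState()); // BLOCKED
    }
}
{noformat}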

> gfsh shutdown command does not shutdown members waiting for missing disk stores
> -------------------------------------------------------------------------------
>                 Key: GEODE-1248
>                 URL:
>             Project: Geode
>          Issue Type: Bug
>          Components: gfsh
>            Reporter: Dan Smith
> The gfsh shutdown command fails to shut down members that are waiting for another member
to recover the latest data. Instead, the shutdown operation gets stuck waiting for a lock on
the cache before it can shut down the member.
> Steps to reproduce (a consolidated gfsh session is sketched at the end of this message):
> 1. Start a locator and two members
> 2. Create a REPLICATE_PERSISTENT region in gfsh
> > create region --name="replicate" --type=REPLICATE_PERSISTENT
> 3. Do a put (probably not necessary)
> > put --key="a" --value="a" --region=/replicate
> 4. shutdown within gfsh
> > shutdown --include-locators=false
> 5. Start one member. It will get stuck waiting for other members to start.
> 6. shutdown within gfsh again.
> > shutdown --include-locators=false
> 7. List members. You will see that the member is still up.
> > list members
> The end result after (6) is that the member is still up. In the stack dump, we see the
shutdown is blocked on the cache lock.
> {noformat}
> "Function Execution Processor1" #62 daemon prio=10 os_prio=0 tid=0x00007fe988013800 nid=0xf83a
waiting for monitor entry [0x00007fe96e062000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>     at com.gemstone.gemfire.cache.CacheFactory.getAnyInstance(
>     - waiting to lock <0x000000071f13e170> (a java.lang.Class for com.gemstone.gemfire.cache.CacheFactory)
>     at
>     at com.gemstone.gemfire.internal.cache.MemberFunctionStreamingMessage.process(
>     at com.gemstone.gemfire.distributed.internal.DistributionMessage.scheduleAction(
>     at com.gemstone.gemfire.distributed.internal.DistributionMessage$
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(
>     at java.util.concurrent.ThreadPoolExecutor$
>     at com.gemstone.gemfire.distributed.internal.DistributionManager.runUntilShutdown(
>     at com.gemstone.gemfire.distributed.internal.DistributionManager$9$
>     at
> "main" #1 prio=5 os_prio=0 tid=0x00007fea0400a000 nid=0xf7dd in Object.wait() [0x00007fea0afa4000]
>    java.lang.Thread.State: TIMED_WAITING (on object monitor)
>     at java.lang.Object.wait(Native Method)
>     at com.gemstone.gemfire.internal.cache.persistence.PersistenceAdvisorImpl$MembershipChangeListener.waitForChange(
>     - locked <0x000000078b067058> (a com.gemstone.gemfire.internal.cache.persistence.PersistenceAdvisorImpl$MembershipChangeListener)
>     at com.gemstone.gemfire.internal.cache.persistence.PersistenceAdvisorImpl.getInitialImageAdvice(
>     at com.gemstone.gemfire.internal.cache.persistence.CreatePersistentRegionProcessor.getInitialImageAdvice(
>     at com.gemstone.gemfire.internal.cache.DistributedRegion.getInitialImageAndRecovery(
>     at com.gemstone.gemfire.internal.cache.DistributedRegion.initialize(
>     at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.createVMRegion(
>     at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.basicCreateRegion(
>     at com.gemstone.gemfire.internal.cache.xmlcache.RegionCreation.createRoot(
>     at com.gemstone.gemfire.internal.cache.xmlcache.CacheCreation.initializeRegions(
>     at com.gemstone.gemfire.internal.cache.xmlcache.CacheCreation.create(
>     at com.gemstone.gemfire.internal.cache.xmlcache.CacheXmlParser.create(
>     at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.loadCacheXml(
>     at com.gemstone.gemfire.internal.cache.ClusterConfigurationLoader.applyClusterConfiguration(
>     at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.requestAndApplySharedConfiguration(
>     at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.initialize(
>     at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.basicCreate(
>     at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.create(
>     at com.gemstone.gemfire.cache.CacheFactory.create(
>     - locked <0x000000071f13e170> (a java.lang.Class for com.gemstone.gemfire.cache.CacheFactory)
>     at com.gemstone.gemfire.cache.CacheFactory.create(
>     - locked <0x000000071f13e170> (a java.lang.Class for com.gemstone.gemfire.cache.CacheFactory)
>     at com.gemstone.gemfire.distributed.internal.DefaultServerLauncherCacheProvider.createCache(
>     at com.gemstone.gemfire.distributed.ServerLauncher.createCache(
>     at com.gemstone.gemfire.distributed.ServerLauncher.start(
>     at
>     at com.gemstone.gemfire.distributed.ServerLauncher.main(
> {noformat}
> The shutdown command needs to somehow trigger shutdown even if the cache is in this state
during startup.
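
For reference, the steps above can be driven end to end from gfsh roughly as follows. This is only a sketch: the member names, ports, and the use of a second gfsh session are illustrative assumptions, not part of the original report.

{noformat}
# Session 1
gfsh> start locator --name=locator1 --port=10334
gfsh> start server --name=server1 --server-port=40404
gfsh> start server --name=server2 --server-port=40405
gfsh> create region --name=replicate --type=REPLICATE_PERSISTENT
gfsh> put --key="a" --value="a" --region=/replicate
gfsh> shutdown --include-locators=false
gfsh> start server --name=server1 --server-port=40404    <-- hangs waiting for server2's disk store

# Session 2 (the first prompt is blocked in the start above)
gfsh> connect --locator=localhost[10334]
gfsh> shutdown --include-locators=false
gfsh> list members                                       <-- server1 is still listed, i.e. the bug
{noformat}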

This message was sent by Atlassian JIRA
