cocoon-dev mailing list archives

From "Alexander Klimetschek (JIRA)" <j...@apache.org>
Subject [jira] Updated: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline
Date Thu, 08 Mar 2007 22:24:24 GMT

     [ https://issues.apache.org/jira/browse/COCOON-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Klimetschek updated COCOON-1985:
------------------------------------------

    Attachment: caching-trials.patch

A patch for the discussion, not a final solution. I made waitForLock() fuzzy, so that
it returns after 250 ms of waiting (via lock.wait(250)) and retries the wait twice - if
that does not work, it accepts the fact that there is no cached response available. This
avoids the deadlock, but will slow down some requests.
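The idea behind the patch can be sketched as follows. This is not the actual Cocoon code, just a minimal illustration of the bounded wait: the store, key names, and method signature are assumptions for the sketch.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class BoundedLockWait {
    static final long WAIT_MS = 250; // per-wait timeout used in the patch
    static final int MAX_TRIES = 2;  // the patch retries the wait twice

    /**
     * Wait a bounded time for another request to finish generating the
     * entry for 'key'. Returns true if the key was released (a cached
     * response should now exist), false if we gave up after ~500 ms and
     * should regenerate the content ourselves.
     */
    static boolean waitForLock(ConcurrentMap<String, Object> store, String key) {
        Object lock = store.get(key);
        if (lock == null) {
            return true; // nobody is generating this entry
        }
        synchronized (lock) {
            for (int i = 0; i < MAX_TRIES && store.get(key) == lock; i++) {
                try {
                    lock.wait(WAIT_MS); // returns on notify OR timeout
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        // If the entry is still claimed, accept the cache miss and rebuild.
        return store.get(key) != lock;
    }
}
```

The crucial behavioral change is in the last line: instead of waiting indefinitely, the caller eventually proceeds without a cached response, trading the deadlock for redundant regeneration.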

In my (special!?) situation this leads to an oscillating run: one request is answered
quickly from the cache, the next request hits the wait problem, runs twice through the
lock (~500 ms) and then rebuilds the content. So quick - slow - quick - slow...
The setup here is a single request at a time! Nothing in parallel.

The sitemap is quite simple, the problem appears when requesting foobar/ with REST=xml:

  <match pattern="*/">
	<act type="authorise" src="jcr:///teamspaces/{1}/tasklist"/>
	<select type="REST">
		<when test="html">
			<read
				src="cocoon:/internal/pipe/tasks/{1}/list.html" />
		</when>
		<when test="xml">
			<read
				src="cocoon:/internal/pipe/tasks/{1}/list.xml" mime-type="text/xml"/>
		</when>
	</select>
  </match>

  <match pattern="internal/pipe/tasks/*/list.xml">
	<generate type="collection"
		src="jcr:///teamspaces/{1}/tasks/">
		<parameter name="include" value=".*\.xml" />
	</generate>
	<transform src="xslt/xml/collection2tasklist.xsl">
		<parameter name="basePath" value="{system-property:mindquarry.server.url}{request:contextPath}{request:servletPath}/" />
		<parameter name="teamspace" value="{1}" />
	</transform>
	<serialize type="xml" />
  </match>


> AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline
> -----------------------------------------------------------------------------------
>
>                 Key: COCOON-1985
>                 URL: https://issues.apache.org/jira/browse/COCOON-1985
>             Project: Cocoon
>          Issue Type: Bug
>          Components: * Cocoon Core
>    Affects Versions: 2.1.9, 2.1.10, 2.1.11-dev (Current SVN), 2.2-dev (Current SVN)
>            Reporter: Ellis Pritchard
>            Priority: Critical
>             Fix For: 2.1.9, 2.1.10, 2.1.11-dev (Current SVN), 2.2-dev (Current SVN)
>
>         Attachments: caching-trials.patch, includer.xsl, patch.txt, sitemap.xmap
>
>
> Cocoon 2.1.9 introduced the concept of a lock in AbstractCachingProcessingPipeline, an
optimization to prevent two concurrent requests from generating the same cached content. The
first request adds the pipeline key to the transient cache to 'lock' the cache entry for that
pipeline; subsequent concurrent requests wait for the first request to cache the content (by
synchronizing and wait()ing on the pipeline key entry) before proceeding, and can then use the
newly cached content.
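The claim-then-wait protocol described above can be sketched like this. The class and method names here are illustrative, not Cocoon's actual API; the point is only the shape of the protocol, with the first caller claiming the key and later callers blocking until release.

```java
import java.util.HashMap;
import java.util.Map;

public class PipelineLock {
    private final Map<String, Object> transientStore = new HashMap<>();

    /** First caller claims the key and must generate; returns true if claimed. */
    synchronized boolean tryClaim(String pipelineKey) {
        if (transientStore.containsKey(pipelineKey)) {
            return false; // someone else is already generating this entry
        }
        transientStore.put(pipelineKey, new Object());
        return true;
    }

    /** Concurrent callers block here until the generator releases the key. */
    synchronized void awaitRelease(String pipelineKey) throws InterruptedException {
        while (transientStore.containsKey(pipelineKey)) {
            wait(); // woken by release()
        }
    }

    /** Generator stores the response in the real cache, then releases the key. */
    synchronized void release(String pipelineKey) {
        transientStore.remove(pipelineKey);
        notifyAll();
    }
}
```

The deadlock in this issue arises when the same thread calls awaitRelease() for a key it claimed itself earlier in the same request: release() can never run, so the wait never ends.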
> However, this has introduced an incompatibility with the IncludeTransformer: if the inclusions
access the same yet-to-be-cached content as the root pipeline, the whole assembly hangs, since
a lock will be made on a lock already held by the same thread, and which cannot be satisfied.
> e.g.
> i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
> ii) the cocoon:/foo.xml sub-pipeline adds its pipeline key to the transient store as
a lock.
> iii) subsequently in the root pipeline, the IncludeTransformer is run.
> iv) one of the inclusions also generates with cocoon:/foo.xml; this sub-pipeline blocks
in AbstractProcessingPipeline.waitForLock() because the sub-pipeline key is already present.
> v) deadlock.
> I've found a (partial, see below) solution for this: instead of a plain Object being
added to the transient store as the lock object, Thread.currentThread() is added; when
waitForLock() is called, if the lock object exists, it checks that it is not the same thread
before attempting to lock it; if it is the same thread, then waitForLock() returns success,
which allows generation to proceed. You lose the efficiency of generating the cache only
once in this case, but at least it doesn't hang! With JDK 1.5 this could be made neater by
using Thread#holdsLock() instead of adding the thread object itself to the transient store.
> See patch file.
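> A minimal sketch of this thread-identity check (illustrative names, not the actual patch code; see the attached patch for the real change) might look like:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadAwareLock {
    private final Map<String, Thread> locks = new ConcurrentHashMap<>();

    /** Called before generating: claim the key for the current thread. */
    void generateLock(String key) {
        locks.put(key, Thread.currentThread());
    }

    /**
     * Returns true when it is safe to proceed. If the entry is held by the
     * current thread (a re-entrant include), waiting would deadlock, so we
     * proceed immediately and regenerate instead of using the cache.
     * Note: waiting on a Thread object is done here only to mirror the
     * described fix; production code should use a dedicated lock object.
     */
    boolean waitForLock(String key) {
        Thread holder = locks.get(key);
        if (holder == null) {
            return true;                 // nothing in flight for this key
        }
        if (holder == Thread.currentThread()) {
            return true;                 // re-entrant request: skip the wait
        }
        synchronized (holder) {
            while (locks.get(key) == holder) {
                try {
                    holder.wait();       // released by releaseLock()
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return true;
    }

    void releaseLock(String key) {
        Thread holder = locks.remove(key);
        if (holder != null) {
            synchronized (holder) {
                holder.notifyAll();
            }
        }
    }
}
```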
> However, even with this fix, parallel includes (when enabled) may still hang: they
pass the not-the-same-thread test, but then deadlock anyway, because the root pipeline,
which holds the initial lock, cannot complete (and therefore satisfy the lock condition
for the parallel threads) before the threads themselves have completed.
> The complete solution is probably to avoid locking if the lock is held by the same top-level
Request, but that requires more knowledge of Cocoon's processing than I (currently) have!
> IMHO, unless a complete solution is found to this, this optimization should be removed
completely, or else made optional by configuration, since it renders the IncludeTransformer
dangerous.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

