incubator-allura-dev mailing list archives

From Dave Brondsema <>
Subject Re: ideas for caching wiki pages, etc
Date Fri, 25 Oct 2013 15:51:11 GMT
I liked this idea initially, but I've been thinking about it more and I'm not
sure it's worth the complexity yet.  Markdown that is slow and also has macros
is probably slow *because* of the macros (e.g. lots of wiki includes pulling in
lots of content), so this strategy doesn't gain us anything over not caching at
all.  It would help for markdown that is slow on its own (e.g. lots of
backslashes or HTML tags) and happens to have a macro in it too, but I think
those kinds of markdown text are uncommon.

On 10/21/13 4:25 PM, Cory Johns wrote:
> I think that moving macro processing to a (post-)post-processing step is a
> good idea.  If we wanted to get really crazy with it, we could actually
> turn all macro calls into AJAX requests.  :-p
> But the current macro processor already uses a placeholder and puts its
> contents in htmlStash (see lines 202-204 in
> Allura/allura/lib/; it should be easy to modify that
> to create the placeholder based on the macro name / params, and
> then later to manually call ForgeMacroPattern().macro() with the
> reconstructed macro name / params.
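Cory's placeholder approach could be sketched roughly like this.  The comment-token format and the `run_macro` callback are invented for illustration; in Allura the real expansion would go through ForgeMacroPattern().macro() and htmlStash:

```python
import base64
import json
import re

def make_placeholder(macro_name, params):
    # Encode the macro call into the placeholder itself, so the call can
    # be reconstructed after the cached HTML is fetched.  This HTML-comment
    # token format is made up for this sketch, not Allura's htmlStash.
    payload = base64.urlsafe_b64encode(
        json.dumps([macro_name, params]).encode()).decode()
    return '<!--MACRO:%s-->' % payload

def expand_placeholders(html, run_macro):
    # `run_macro(name, params)` stands in for ForgeMacroPattern().macro().
    def repl(match):
        name, params = json.loads(base64.urlsafe_b64decode(match.group(1)))
        return run_macro(name, params)
    return re.sub(r'<!--MACRO:([A-Za-z0-9_=-]+)-->', repl, html)
```

The cached HTML then only needs a cheap regex pass per request, while the macros themselves run fresh every time.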
> On Mon, Oct 21, 2013 at 1:59 PM, Igor Bondarenko <> wrote:
>> Another solution could be: cache only static content of the page and always
>> re-render macros. Something like this will do:
>> Before putting a page to cache - strip out all macros from source markdown
>> and replace them with some text (e.g. MACRO:<macro-hash>). Render resulting
>> markdown and put it to cache.  Before displaying the page from cache - find all
>> macros in the source markdown, render each separately and replace
>> corresponding MACRO:<macro-hash> with rendered html.
>> It's trickier than Dave's option, though.
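A minimal sketch of Igor's strip-and-rehydrate strategy, with `render_markdown` and `render_macro` as hypothetical stand-ins for Allura's real renderer and macro engine:

```python
import hashlib
import re

# Simplified pattern for Allura-style [[macro args]] calls.
MACRO_RE = re.compile(r'\[\[([^\]]+)\]\]')

def render_cacheable(text, render_markdown):
    """Strip macros before the expensive markdown pass, so its output can
    be cached.  Returns (html, macros), where `macros` maps each
    MACRO:<macro-hash> placeholder back to its original macro source."""
    macros = {}

    def stash(match):
        key = 'MACRO:' + hashlib.md5(match.group(1).encode()).hexdigest()
        macros[key] = match.group(1)
        return key

    stripped = MACRO_RE.sub(stash, text)
    return render_markdown(stripped), macros

def rehydrate(cached_html, macros, render_macro):
    # Runs on every page view: only the macros are re-rendered.
    for key, source in macros.items():
        cached_html = cached_html.replace(key, render_macro(source))
    return cached_html
```

Only the cheap string replacement happens per view; the static bulk of the page is rendered once.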
>> On Mon, Oct 21, 2013 at 8:07 PM, Dave Brondsema <>
>> wrote:
>>> I'd like to address soon.  The summary is that we currently have a max
>>> char size for rendering markdown, since sometimes it can be extremely
>>> slow to render (and we've tried to improve that with no luck).  A max
>>> char size is ugly though, and we don't want that.  We added caching for
>>> markdown rendering recently, but have only applied it to comments
>>> ("posts") so far.  If we expand it to wiki pages, tickets, etc, then
>>> the max char limit can be removed or made much, much higher.  But it's
>>> more likely that a macro (e.g. include another page) will be used in
>>> wiki pages and tickets, and then our simple caching strategy won't work
>>> well because the macro won't be re-run.
>>> Anyone have ideas for how to do cache invalidation in that situation?
>>> One idea I have is pretty crude, but might work: check to see if there
>>> are any macros in the markdown (search for '[[') and never cache those.
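The crude check Dave describes is essentially a one-liner; the function name here is invented for the sketch:

```python
def is_cacheable(markdown_text):
    # Skip the cache entirely whenever the text might contain a macro.
    # A false positive (a literal '[[' that isn't a macro) just causes a
    # harmless cache miss, never a stale macro.
    return '[[' not in markdown_text
```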
>>> --
>>> Dave Brondsema :
>>> : personal
>>> : programming
>>>               <><
>> --
>> Igor Bondarenko

Dave Brondsema : : personal : programming
