apr-dev mailing list archives

From Justin Erenkrantz <jerenkra...@ebuilt.com>
Subject Review of possible replacement for pools
Date Sat, 25 Aug 2001 06:04:38 GMT
Enclosed is my review of your posted patch.  Actually, I'd probably
like to see Brian's patch again so I can compare the two, as I think
they achieve similar goals.

The important aspect of this patch is that it makes the pool code
easier to understand.  I will say, though, that it needs comments.
I'll volunteer to add them if this patch is accepted.

Here goes - my comments are prefaced with a > at the beginning of
the line (the reverse of normal as I'm too lazy to insert > at the
beginning of the source)...

-------------
#define MIN_ALLOC 8192
#define MAX_INDEX   32

#define BOUNDARY_INDEX 12
#define BOUNDARY_SIZE (1 << BOUNDARY_INDEX)

> Okay, so that people have a clue what is going on here: Sander has
> a freelist that is arranged by size.  Each node is filed under the
> power of 2 it is closest to, IIRC.  The 0 element is reserved for
> items bigger than the largest indexed size.  Sander can correct me
> if I'm misremembering this.
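>
> For reference, here is roughly how I read the index calculation from
> the defines above.  This is just my guess at the scheme, not code
> quoted from the patch, so take it with a grain of salt:
>
>     /* Sketch only: map a requested size to a freelist index.  Sizes
>      * are treated as multiples of BOUNDARY_SIZE (4K); MIN_ALLOC (8K)
>      * means the smallest real node gets index 1, so index 0 is free
>      * to hold anything too large to index directly. */
>     static apr_uint32_t size_to_index(apr_size_t size)
>     {
>         apr_uint32_t index;
>
>         size = ALIGN(size, BOUNDARY_SIZE);     /* round up to a 4K multiple */
>         index = (size >> BOUNDARY_INDEX) - 1;  /* 8K -> 1, 12K -> 2, ... */
>
>         return (index >= MAX_INDEX) ? 0 : index;
>     }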

#define ALIGN_DEFAULT(size) ALIGN(size, 8)

> Why 8 as the default?
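>
> (For anyone following along, I'm assuming ALIGN is the usual
> round-up-to-a-boundary macro -- my assumption, not something quoted
> from the patch:
>
>     #define ALIGN(size, boundary) \
>         (((size) + ((boundary) - 1)) & ~((boundary) - 1))
>
> so ALIGN_DEFAULT(13) == 16 and ALIGN_DEFAULT(16) == 16.)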

static APR_INLINE node_t *node_malloc(allocator_t *allocator, apr_size_t size)
{
>   ...blah, blah, blah...search for it on the indexed freelist and
>   return it...if none available, call malloc...at this point, we are
>   setting up the newly allocated node.
    node->next = NULL;
    node->index = index;
    node->first_avail = (char *)node + SIZEOF_NODE_T;
    node->endp = (char *)node + size;

> Problem A.  You are adding SIZEOF_NODE_T here, but it is already
> implicitly included in the size parameter (most calls to node_malloc
> pass size + SIZEOF_NODE_T).  Either move this portion out (not sure
> how you would do this), or have node_malloc add the space required
> for the node_t and don't call node_malloc with + SIZEOF_NODE_T.
> However, that
> strategy does present a problem in apr_pool_create_ex where you call 
> node_malloc with MIN_ALLOC as the size rather than 
> MIN_ALLOC+SIZEOF_NODE_T.  So, in order to truly get MIN_ALLOC, you'd 
> have to ask for MIN_ALLOC-SIZEOF_NODE_T.  Hmm.  But, I don't like
> this strategy where you *must* pass in an extra SIZEOF_NODE_T bytes.
> (Yes, this function isn't user-callable, but still...)
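>
> To make that second option concrete, something along these lines
> would work (untested sketch; the helper name is made up):
>
>     /* Hypothetical wrapper, rather than changing node_malloc itself:
>      * allocate a node with at least 'size' bytes of usable space,
>      * hiding the header arithmetic from the callers. */
>     static APR_INLINE node_t *node_malloc_usable(allocator_t *allocator,
>                                                  apr_size_t size)
>     {
>         return node_malloc(allocator, size + SIZEOF_NODE_T);
>     }
>
> apr_pool_create_ex would then ask for MIN_ALLOC - SIZEOF_NODE_T if it
> really wants a MIN_ALLOC-sized block in total.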

static void node_free(allocator_t *allocator, node_t *node)
{
    node_t *next;
    apr_uint32_t index, max_index;

    if (allocator->lock)
        apr_lock_acquire(allocator->lock);

> This reminds me that we should have something like 
> apr_lock_acquire_opt(foo) which will not do anything if the passed-in
> parameter is NULL.  This would seriously cut down on the number of
> places where this if (lock) lock_acquire syntax occurs.  Someone
> brought this up in a commit log recently.  It's a good point.  Hell, 
> even a #defined macro works well for this.  
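>
> As a strawman (macro names made up), something like:
>
>     /* No-ops when no lock was created for this allocator. */
>     #define LOCK_ACQUIRE_OPT(l)  do { if (l) apr_lock_acquire(l); } while (0)
>     #define LOCK_RELEASE_OPT(l)  do { if (l) apr_lock_release(l); } while (0)
>
> would turn the two-line prologue above into a single
> LOCK_ACQUIRE_OPT(allocator->lock); call.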

APR_DECLARE(void *) apr_palloc(apr_pool_t *pool, apr_size_t size)
{
    node_t *active, *node;
    void *mem;
    char *endp;

    size = ALIGN_DEFAULT(size);
    active = pool->mactive;

    endp = active->first_avail + size;
    if (endp < active->endp) {
        mem = active->first_avail;
        active->first_avail = endp;
        return mem;
    }

> I think this is a race condition.  This is also present in the current
> pool code, so this isn't something that you introduced.  Consider the
> following scenario:
> Thread A has access to pool Z.
> Thread B has access to pool Z as well.
> Thread A calls apr_palloc, gets active, sees that there is enough
> space, sets mem.
> *CONTEXT SWITCH*
> Thread B now calls apr_palloc, gets active, sees that there is enough
> space, sets mem, updates first_avail, returns.
> *CONTEXT SWITCH*
> Thread A updates first_avail.  Oops.  =-)  However, I'd suggest an
> option in apr_pool_create_ex that says that this pool is NOT shared
> and doesn't need locking.  Otherwise, we must lock this case (not
> an allocator lock, but a pool lock).  This seems to be the only case 
> where this could happen.  Seems a waste, but it definitely could happen.
> This race doesn't matter in the apr_pool_clear or apr_pool_destroy
> since two children don't try to clear the pool at the same time.
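>
> In code terms, the shared-pool case would look something like this
> (rough sketch only; the pool->lock field and the creation flag are
> made up for illustration):
>
>     /* Guard the bump-pointer fast path with a per-pool lock that is
>      * only created when the pool wasn't flagged as unshared. */
>     if (pool->lock)
>         apr_lock_acquire(pool->lock);
>
>     endp = active->first_avail + size;
>     if (endp < active->endp) {
>         mem = active->first_avail;
>         active->first_avail = endp;
>
>         if (pool->lock)
>             apr_lock_release(pool->lock);
>         return mem;
>     }
>     /* ...slow path continues with the lock held and releases it
>      * before returning... */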
>
> I know the code doesn't crash under high load, so I know it has
> a reasonable degree of quality and it is definitely easier to 
> understand.  Assuming the matters brought up above are addressed
> to my satisfaction, I'll give this a +1.  -- justin
>
> P.S. Oh, man.  This week's fun never ends.  My mail server that
> I receive email on had a full /var partition.  What is going on
> this week?  Well, I'm slowly getting queued-up emails now...I
> thought the list was awfully quiet.

