apr-dev mailing list archives

From "Sander Striker" <stri...@apache.org>
Subject RE: Review of possible replacement for pools
Date Sat, 25 Aug 2001 11:45:21 GMT
Hi,

> Enclosed is my review of your posted patch.  Actually, I probably would
> like to see Brian's patch again to compare it as I think they achieve
> similar goals.
>
> The important aspect of this patch is that it makes the pool code
> easier to understand.  I'll say that this would need comments.
> I'll volunteer to add comments that if this patch is accepted.

Yes.  Just for clarity: I started with an empty file and copied source
or concepts over from the original pools code.  Then I added some of
my personal sauce, and this is the result.  So the similarity with the
original pools code is no coincidence.

> Here goes - my comments are prefaced with a > at the beginning of
> the line (the reverse of normal as I'm too lazy to insert > at the
> beginning of the source)...
>
> -------------
> #define MIN_ALLOC 8192
> #define MAX_INDEX   32
>
> #define BOUNDARY_INDEX 12
> #define BOUNDARY_SIZE (1 << BOUNDARY_INDEX)
>
>> Okay so that people have a clue what is going on here, Sander has
>> a freelist that is arranged by size.  It is based on the power of
>> 2 that it is closest to, IIRC.  The 0 element is reserved for items
>> bigger than 2^32.  Sander can correct me if I'm misremembering this.

Every node is a multiple of BOUNDARY_SIZE, which in this case is 4096.
So, when requesting 9034 bytes, we allocate a node of 12288 bytes and
serve the request from that.  The remainder is saved for the next
allocation, just like in the original pools code.  The minimum size of
a node is MIN_ALLOC, which is set to 8192.
On the free list, the nodes are indexed.  Or rather, nodes up to a
certain size are indexed, I should say; the rest are dumped in the
first slot.  The indexing covers node sizes up to BOUNDARY_SIZE *
MAX_INDEX.  To see how it works, take a close look at node_malloc()
and node_free().
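To make that concrete, here is a minimal standalone sketch of the sizing
and indexing logic as I understand it (node_size_for/freelist_index are
my own names, and the actual list manipulation in node_malloc()/
node_free() is omitted):

```c
#include <stddef.h>

#define MIN_ALLOC 8192
#define MAX_INDEX 32

#define BOUNDARY_INDEX 12
#define BOUNDARY_SIZE (1 << BOUNDARY_INDEX)

/* Round a request up to a multiple of BOUNDARY_SIZE, with a floor of
 * MIN_ALLOC; only addition, masking and comparison, no '/' or '*'. */
static size_t node_size_for(size_t size)
{
    size_t rounded = (size + BOUNDARY_SIZE - 1) & ~(size_t)(BOUNDARY_SIZE - 1);
    return rounded < MIN_ALLOC ? MIN_ALLOC : rounded;
}

/* Map a node size to its freelist slot: sizes up to
 * BOUNDARY_SIZE * MAX_INDEX get their own slot; anything larger is
 * dumped in slot 0. */
static unsigned freelist_index(size_t node_size)
{
    unsigned index = (unsigned)(node_size >> BOUNDARY_INDEX);
    return index > MAX_INDEX ? 0 : index;
}
```

With these definitions a 9034-byte request rounds up to 12288 bytes
(three boundaries), matching the example above.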

> #define ALIGN_DEFAULT(size) ALIGN(size, 8)
>
>> Why 8 as the default?

A guessed fixed number; no science involved.  If someone has a better
way of doing this _without_ introducing '/' and '*', I'll be happy.
If 8 is bogus, please suggest a better number.
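For reference, the usual add-and-mask idiom for rounding up to a power
of two is what I assume ALIGN expands to; it needs no division or
multiplication, but the boundary must be a power of two:

```c
/* Round 'size' up to the next multiple of 'boundary'; 'boundary' must
 * be a power of two for the mask trick to be valid. */
#define ALIGN(size, boundary) \
    (((size) + ((boundary) - 1)) & ~((boundary) - 1))

#define ALIGN_DEFAULT(size) ALIGN(size, 8)
```

Eight happens to be a safe alignment for all fundamental types on the
common 32- and 64-bit platforms, which is presumably why it reads as a
reasonable default.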

> static APR_INLINE node_t *node_malloc(allocator_t *allocator,
> apr_size_t size)
> {
> >   ...blah, blah, blah...search for it on the indexed freelist and
> >   return it...if none available, call malloc...at this point, we are
> >   setting up the newly allocated node.
>     node->next = NULL;
>     node->index = index;
>     node->first_avail = (char *)node + SIZEOF_NODE_T;
>     node->endp = (char *)node + size;
>
>> Problem A.  You are adding the SIZEOF_NODE_T here that is implicitly
>> included in the size parameter (most calls to node_malloc have size +
>> SIZEOF_NODE_T).  Either move this portion out (not sure how you
>> would do this), or have node_malloc add the space required for the
>> node_t and don't call node_malloc with + SIZEOF_NODE_T.  However, that
>> strategy does present a problem in apr_pool_create_ex where you call
>> node_malloc with MIN_ALLOC as the size rather than
>> MIN_ALLOC+SIZEOF_NODE_T.  So, in order to truly get MIN_ALLOC, you'd
>> have to ask for MIN_ALLOC-SIZEOF_NODE_T.  Hmm.  But, I don't like
>> this strategy where you *must* pass in an extra SIZEOF_NODE_T bytes.
>> (Yes, this function isn't user-callable, but still...)

I was facing the same issue; this is what I chose to use.
Other solutions are welcome.  I somewhat prefer the
MIN_ALLOC - SIZEOF_NODE_T approach in apr_pool_create_ex.  I'll put
that in a separate patch, for people to choose from (or someone can
come up with something better :).
Keep in mind that this is an internal function; a little line at the
top of the function explaining what you need to pass in would be
satisfactory too, IMHO.
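A hedged sketch of the alternative convention, in which the allocator
adds the header size itself and callers ask for payload bytes only
(the node_t fields mirror the patch; node_malloc_payload is a made-up
name, and the freelist lookup is omitted):

```c
#include <stdlib.h>
#include <stddef.h>

typedef struct node_t node_t;
struct node_t {
    node_t *next;
    unsigned index;
    char *first_avail;
    char *endp;
};

#define SIZEOF_NODE_T sizeof(node_t)

/* Caller asks for 'payload' usable bytes; the header is accounted for
 * here, so no call site needs to add SIZEOF_NODE_T itself.  With this
 * convention, apr_pool_create_ex would request
 * MIN_ALLOC - SIZEOF_NODE_T to get a true MIN_ALLOC-sized node. */
static node_t *node_malloc_payload(size_t payload)
{
    size_t total = payload + SIZEOF_NODE_T;
    node_t *node = malloc(total);

    if (!node)
        return NULL;

    node->next = NULL;
    node->first_avail = (char *)node + SIZEOF_NODE_T;
    node->endp = (char *)node + total;
    return node;
}
```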

> static void node_free(allocator_t *allocator, node_t *node)
> {
>     node_t *next;
>     apr_uint32_t index, max_index;
>
>     if (allocator->lock)
>         apr_lock_acquire(allocator->lock);
>
>> This reminds me that we should have something like
>> apr_lock_acquire_opt(foo) which will not do anything if the passed in
>> parameter is NULL.  This would seriously cut down on the number of
>> places where this if (lock) lock_acquire syntax occurs.  Someone
>> brought this up in a commit log recently.  It's a good point.  Hell,
>> even a #defined macro works well for this.

I'll put a simple macro solution in this code as a separate patch.
We might consider exporting such a macro in apr_lock.h.
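Something along these lines; the stub lock here only stands in for
apr_lock_t/apr_lock_acquire so the macro's shape can be shown
standalone, and the macro name is a placeholder:

```c
#include <stddef.h>

/* Stand-in for apr_lock_t / apr_lock_acquire; the real macro would
 * call the APR lock functions instead. */
typedef struct { int held; } fake_lock_t;
static void fake_lock_acquire(fake_lock_t *l) { l->held = 1; }

/* Acquire the lock only if one was given; a no-op for NULL.  The
 * do/while(0) wrapper keeps it safe in unbraced if/else bodies. */
#define LOCK_ACQUIRE_OPT(l) do { if (l) fake_lock_acquire(l); } while (0)
```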

> APR_DECLARE(void *) apr_palloc(apr_pool_t *pool, apr_size_t size)
> {
>     node_t *active, *node;
>     void *mem;
>     char *endp;
>
>     size = ALIGN_DEFAULT(size);
>     active = pool->mactive;
>
>     endp = active->first_avail + size;
>     if (endp < active->endp) {
>         mem = active->first_avail;
>         active->first_avail = endp;
>         return mem;
>     }
>
>> I think this is a race condition.  This is also present in the current
>> pool code, so this isn't something that you introduced.  Consider the
>> following scenario:
>> Thread A has access to pool Z.
>> Thread B has access to pool Z as well.
>> Thread A calls apr_palloc, gets active, sees that there is enough
>> space, sets mem.
>> *CONTEXT SWITCH*
>> Thread B now calls apr_palloc, gets active, sees that there is enough
>> space, sets mem, updates first_avail, returns.
>> *CONTEXT SWITCH*
>> Thread A updates first_avail.  Oops.  =-)  However, I'd suggest an
>> option in apr_create_pool_ex that says that this pool is NOT shared
>> and doesn't need locking.  Otherwise, we must lock this case (not
>> an allocator lock, but a pool lock).  This seems to be the only case
>> where this could happen.  Seems a waste, but it definitely could happen.
>> This race doesn't matter in the apr_pool_clear or apr_pool_destroy
>> since two children don't try to clear the pool at the same time.

This _is_ a race condition.  I reported it on the list at the end of
June.
Subject: Possible race condition in pools WAS: RE: Thoughts on locks.
At the end of the thread the conclusion was to fix it, most likely
through documentation.

Personally I think that locking on each allocation will kill performance.
Just never use a pool in two threads.

Btw, similar race conditions are present in the cleanup code in pools.
I haven't touched those.  Take a look and you'll conclude that no two
threads should ever register a cleanup at the same time.  Or I must be
missing something...

If there are going to be two locks (I doubt it, but it might happen),
the pool lock should be used when adding/removing child pools to a
pool.  At the moment I am using the allocator lock for that.
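For the record, the optional-lock idea Justin suggests could look
roughly like this; pool_t and pool_palloc are illustrative names, not
the patch's, and a pool created without the hypothetical "shared" flag
would simply carry a NULL lock and pay nothing on the fast path:

```c
#include <pthread.h>
#include <stddef.h>

/* Toy pool: a single fixed buffer plus an optional per-pool lock that
 * guards the first_avail read-modify-write against the race described
 * above.  lock is NULL for single-threaded (unshared) pools. */
typedef struct {
    char buf[8192];
    char *first_avail;
    char *endp;
    pthread_mutex_t *lock;
} pool_t;

static void *pool_palloc(pool_t *p, size_t size)
{
    void *mem = NULL;

    if (p->lock)
        pthread_mutex_lock(p->lock);

    if (p->first_avail + size <= p->endp) {
        mem = p->first_avail;
        p->first_avail += size;
    }

    if (p->lock)
        pthread_mutex_unlock(p->lock);

    return mem;
}
```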

>> I know the code doesn't crash under high load, so I know it has
>> a reasonable degree of quality and it is definitely easier to
>> understand.  Assuming the matters brought up above are addressed
>> to my satisfaction, I'll give this a +1.  -- justin

Thanks for the review.  Patches against the last version and a new
version (the old version with the patches applied) are attached.

Sander
