apr-dev mailing list archives

From Alex Wulms <alex.wu...@scarlet.be>
Subject Re: dev Digest 26 Aug 2010 13:15:26 -0000 Issue 1836
Date Fri, 27 Aug 2010 17:48:48 GMT
On Tue, 24 Aug 2010 21:11:54 +0100
Nick Kew wrote:
> On Tue, 24 Aug 2010 20:55:26 +0200
> Alex Wulms <alex.wulms@scarlet.be> wrote:
> > Hi,
> > 
> > I'm Alex. This is my first message to this mailing list. Some of you
> > might already have seen some messages from me on the
> > dev@httpd.apache.org mailing list.
> And from FOSDEM, unless I'm confusing you with someone?
Hi Nick, it's indeed me.
> > Please feel free to include them in a future version of APR.
> Thanks.  Looks good on the 5-second glance.  Could well be worth
> adopting!  Have you benchmarked it against dbm or memcache, and
> would you think it a good socache backend?
I did not benchmark it against dbm or against memcache. I noticed in APR
some client code for connecting to a memcached server, and my first
instinct was that connecting through a socket (I assume) to another
server process carries a basic performance overhead, even if it is a
local IPC socket (does memcache support this? Either way, you at least
pay the cost of a context switch between the httpd request worker
process and the memcached process for each look-up), so I did not
investigate the memcache code further. Instead, I continued on the
shared-memory path that I had already entered before I had the need for
a hash table. Obviously I might have been completely wrong in my
assumption that memcached is a separate process rather than being
integrated into the httpd request worker process.

I'm not familiar enough with the socache backend to judge if this hash
table implementation is suitable for it. I believe the code might still
need some enhancement.

One important point is that the rmm-hash code is not 100% thread safe;
I have left it up to the client of the API to implement an appropriate
locking strategy to protect against potential corruption. This is a bit
like the approach used in, for example, the Java collections framework.
The client can usually be smarter about determining the size of the
critical section and when to take the locks. My understanding is that
invoking the (global) lock function is itself a relatively expensive
operation, so in a tight loop where I need to invoke the rmm-hash
methods multiple times and only execute a little bit of other code, I
prefer to lock once before the loop and unlock after it (in my client
code) rather than having the rmm-hash implementation perform
(expensive?) lock/unlock operations per iteration. But I don't know how
well that plays with other usages, for example in a socache backend.


> -- Nick Kew
