apr-dev mailing list archives

From "Ryan Bloom" <...@covalent.net>
Subject RE: Atomic operations
Date Mon, 11 Feb 2002 01:05:45 GMT
> Jeff Trawick wrote:
> > "Ryan Bloom" <rbb@covalent.net> writes:
> >
> >
> >>If we are on a platform that doesn't support atomic add/set, just do
> >>what Windows does.  Namely, have a global mutex that is set for any
> >>atomic add or set command.  The add/set is only atomic across other
> >>threads/processes using the same add/set functions, but that should be
> >>good enough.
> >>
> >
> > not scalable...
> >
> > the app would be better off rolling its own support
> 
> The way I was thinking of implementing the atomics is to implement an
> 'apr_atomic_t' which is only 24 bits wide (Linux only supports 24 bits).
> 
> This would be mapped to the OS's atomic type if available; otherwise it
> would be something like
> 
> typedef struct {
> 	apr_lock_t lock; /* either a mutex or some home-grown spinlock */
> 	int value;
> } apr_atomic_t;
> 
> which would introduce an apr_atomic_init/apr_atomic_term pair.
> 
> I'll work on getting the 'generic' version up first, and
> make sure it works in a speedy way.
> 
> Now my main question: how do OSes handle multiple locks? I don't think
> Solaris has an issue with 1000s of locks being created, but is there some
> kind of performance penalty in creating 100/1000 locks instead of using a
> single lock?
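
A rough sketch of the generic fallback quoted above, for illustration only:
the apr_atomic_t layout and the apr_atomic_init name follow the description
in the quote, but the locking calls are shown with a plain POSIX mutex
standing in for apr_lock_t, and apr_atomic_add is an assumed name, so the
actual APR calls and signatures may differ.

#include <pthread.h>

/* Generic fallback: pair the value with a lock, as described above.
 * A POSIX mutex stands in here for the apr_lock_t in the quoted struct. */
typedef struct {
    pthread_mutex_t lock;  /* either a mutex or some home-grown spinlock */
    int value;
} apr_atomic_t;

/* Assumed init, corresponding to the apr_atomic_init mentioned above. */
int apr_atomic_init(apr_atomic_t *a, int initial)
{
    a->value = initial;
    return pthread_mutex_init(&a->lock, NULL);
}

/* Assumed add: atomic only with respect to other callers of these
 * functions, which is the stated limitation of the lock-based fallback. */
int apr_atomic_add(apr_atomic_t *a, int delta)
{
    int result;
    pthread_mutex_lock(&a->lock);
    a->value += delta;
    result = a->value;
    pthread_mutex_unlock(&a->lock);
    return result;
}

With one lock per apr_atomic_t, every counter consumes its own lock, which
is exactly where the "100/1000 locks" concern in the question comes from.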

Every lock that you create will use some system resource, either a file
descriptor or a semaphore.  The more you create, the fewer are available
for other things.  On some platforms that may mean you need to tweak the
kernel to get enough locks.  I am thinking of HP-UX in particular.

My opinion is that a single lock is the only way to do this without using
too many resources.  The real question is which platforms don't already
support some atomic operation, and how prevalent they are.  Even BeOS
supports atomic operations; we already use them in some of the APR code.
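
By contrast with the per-counter sketch above, the single-lock approach
described here (like the Windows-style fallback quoted at the top of the
thread) would route every atomic add/set in the process through one shared
mutex, so only one descriptor or semaphore is consumed no matter how many
counters exist.  A minimal sketch, again using a POSIX mutex and
illustrative names rather than APR's actual API:

#include <pthread.h>

/* One process-wide lock shared by every lock-based atomic operation.
 * Only a single system resource is used, at the cost of serializing
 * all atomic add/set calls in the process on the same mutex. */
static pthread_mutex_t atomic_global_lock = PTHREAD_MUTEX_INITIALIZER;

int generic_atomic_add(volatile int *mem, int delta)
{
    int result;
    pthread_mutex_lock(&atomic_global_lock);
    *mem += delta;
    result = *mem;
    pthread_mutex_unlock(&atomic_global_lock);
    return result;
}

That single shared lock is also the source of Jeff's "not scalable"
objection above: heavily contended counters all end up queueing on the
same mutex.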

Ryan



