incubator-blur-user mailing list archives

From Aaron McCurry <amccu...@gmail.com>
Subject Re: SlabAllocationCacheValueBufferPool thread-safe?
Date Thu, 21 Jul 2016 13:12:17 GMT
On Thu, Jul 21, 2016 at 1:55 AM, Ravikumar Govindarajan <
ravikumar.govindarajan@gmail.com> wrote:

> I just now saw the BlockLocks code. It is documented to be thread-safe.
> Apologies for the trouble...
>
> Btw, a small nit: the method below never returns true. Is that
> intentional?
>
>     boolean releaseIfValid(long address) {
>       if (address >= _address && address < _maxAddress) {
>         long offset = address - _address;
>         int index = (int) (offset / _chunkSize);
>         _locks.clear(index);
>       }
>       return false;
>     }
>

In my 30-second review I think you are right.  It should probably return
true.  However, I want to analyze what happens with the current code so I
can write a test that proves there is a problem (because there probably is)
and then fix it.
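
For reference, this is roughly what I would expect the fix to look like,
using the same fields as in your snippet (an untested sketch, not a
committed patch):

    // Sketch of the fix: report success only when the address actually
    // falls inside this slab's range.
    boolean releaseIfValid(long address) {
      if (address >= _address && address < _maxAddress) {
        long offset = address - _address;
        int index = (int) (offset / _chunkSize);
        _locks.clear(index);
        return true;  // chunk belonged to this slab and was released
      }
      return false;   // address is outside this slab
    }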


>
> Also, I thought a background thread could attempt to merge sparsely
> populated slabs into a single slab and release the freed memory (in 128MB
> chunks) back to the OS...
>

I think this is a good idea; I just didn't get around to writing it.
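
Very roughly, I would picture something like the sketch below. Note that
Slab, occupancy(), migrate(), release() and the slabs collection are all
invented names for illustration; none of this exists in the codebase today:

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.stream.Collectors;

    // Hypothetical background compactor; Slab and its methods are made up.
    ScheduledExecutorService compactor =
        Executors.newSingleThreadScheduledExecutor();
    compactor.scheduleWithFixedDelay(() -> {
      // Find slabs that are mostly empty.
      List<Slab> sparse = slabs.stream()
          .filter(s -> s.occupancy() < 0.25)
          .collect(Collectors.toList());
      if (sparse.size() < 2) {
        return; // nothing worth merging
      }
      // Copy live chunks from the sparse slabs into one target slab, then
      // unmap the emptied 128MB slabs so the memory goes back to the OS.
      Slab target = sparse.get(0);
      for (Slab slab : sparse.subList(1, sparse.size())) {
        slab.migrate(target);
        slab.release();
      }
    }, 1, 1, TimeUnit.MINUTES);

Migrating live chunks safely while readers still hold addresses into the
old slab is the part that would need the most care.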


>
> Do you think it would be beneficial, or would it make things needlessly
> complex?
>

I think for dedicated servers it might be overkill, but for a mixed-workload
environment (think Docker containers and the like) it would be useful.

Aaron


>
> On Wed, Jul 20, 2016 at 11:38 PM, Aaron McCurry <amccurry@gmail.com>
> wrote:
>
> > I don't think there is a race condition, because the allocation occurs
> > atomically in the BlockLocks class.  Do you see a problem?  Let me know.
> >
> > Aaron
> >
> > On Wed, Jul 20, 2016 at 9:19 AM, Ravikumar Govindarajan <
> > ravikumar.govindarajan@gmail.com> wrote:
> >
> > > I came across the following in SlabAllocationCacheValueBufferPool.java.
> > > Is the method below thread-safe?
> > >
> > >   @Override
> > >   public CacheValue getCacheValue(int cacheBlockSize) {
> > >     validCacheBlockSize(cacheBlockSize);
> > >     int numberOfChunks = getNumberOfChunks(cacheBlockSize);
> > >     ...
> > >   }
> > >
> > >
> > > It does allocation in a tight loop using BlockLocks, Slab & Chunks. Is
> > > there a race condition where two threads can pick the same slab and
> > > chunk?
> > >
> >
>
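
P.S. On the atomicity point in the quoted thread above, the general
technique looks something like the sketch below. It illustrates
compare-and-set bit claiming on an AtomicLongArray; it is not the actual
BlockLocks source:

    import java.util.concurrent.atomic.AtomicLongArray;

    // Illustration only: claim and release bits atomically so that two
    // threads racing for the same chunk index cannot both win.
    class BitLocks {
      private final AtomicLongArray words;

      BitLocks(int bits) {
        words = new AtomicLongArray((bits + 63) / 64);
      }

      // Returns true if this thread claimed the bit; exactly one of two
      // racing threads will see its compareAndSet succeed.
      boolean set(int index) {
        int word = index >>> 6;
        long mask = 1L << (index & 63);
        while (true) {
          long current = words.get(word);
          if ((current & mask) != 0) {
            return false;          // already claimed by another thread
          }
          if (words.compareAndSet(word, current, current | mask)) {
            return true;           // claimed atomically
          }
          // lost the CAS race; re-read the word and retry
        }
      }

      void clear(int index) {
        int word = index >>> 6;
        long mask = ~(1L << (index & 63));
        while (true) {
          long current = words.get(word);
          if (words.compareAndSet(word, current, current & mask)) {
            return;
          }
        }
      }
    }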
