incubator-directmemory-dev mailing list archives

From Akash Ashok <thehellma...@gmail.com>
Subject Re: Memory Deallocation Issue
Date Thu, 20 Oct 2011 17:16:26 GMT
On Thu, Oct 20, 2011 at 3:13 PM, Ashish <paliwalashish@gmail.com> wrote:

> On Thu, Oct 20, 2011 at 2:32 PM, Akash Ashok <thehellmaker@gmail.com>
> wrote:
> > On Thu, Oct 20, 2011 at 2:13 PM, Ashish <paliwalashish@gmail.com> wrote:
> >
> >> On Thu, Oct 20, 2011 at 1:57 PM, Raffaele P. Guidi
> >> <raffaele.p.guidi@gmail.com> wrote:
> >> > Gooood news! As with every not-so-well-documented piece of software,
> >> > I should have read the code before making wrong assumptions (or at
> >> > least taken a look at stackoverflow ;) ). I think we should ask our
> >> > mentors to assign developer rights. Or is it to be filed with INFRA?
> >> > Sorry, I'm still an ASF rookie ;)
> >> >
> >> > Thanks,
> >> >    Raffaele
> >>
> >> I again have to disagree on this feature. Why would you need to
> >> deallocate memory? You should know how much you need.
> >> It's always better to have contiguous memory allocated. It works
> >> well. Dynamically resizing will pose challenges and add performance
> >> issues.
> >>
> >> From a cache perspective, we are clearing elements anyway, making
> >> way for new ones.
> >>
> >> IMHO, I see off-heap as a big chunk of memory which is pre-allocated,
> >> and the MemoryManager we have written should deal with all the object
> >> management. Whether we manage memory as chunks or realize maps in
> >> native memory is all up to the design we choose.
> >>
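(As an illustration of the pre-allocated-chunk idea above: a minimal sketch,
using a hypothetical ChunkPool class rather than our actual MemoryManager, of
carving one big direct buffer into fixed-size slices.)

    import java.nio.ByteBuffer;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal sketch only: pre-allocate one contiguous direct buffer up front
    // and hand out fixed-size slices of it, so nothing is ever "deallocated",
    // only returned to the pool.
    public class ChunkPool {
        private final ByteBuffer backing;            // one big contiguous allocation
        private final Deque<ByteBuffer> free = new ArrayDeque<>();

        public ChunkPool(int chunkSize, int chunkCount) {
            backing = ByteBuffer.allocateDirect(chunkSize * chunkCount);
            for (int i = 0; i < chunkCount; i++) {
                backing.position(i * chunkSize).limit((i + 1) * chunkSize);
                free.push(backing.slice());          // slices share the backing memory
            }
        }

        public ByteBuffer acquire() {                // null when the pool is exhausted
            return free.poll();
        }

        public void release(ByteBuffer chunk) {      // "free" just means back to the pool
            chunk.clear();
            free.push(chunk);
        }
    }

Usage would simply be acquire() when storing an entry and release() on
eviction; whether the real design uses chunks like this or maps in native
memory is exactly the choice mentioned above.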
> > This is a very important feature, I believe. Assume that in production
> > you made the wrong decision about how much memory to pre-allocate;
> > you shouldn't be charged with the penalty of being unable to use that
> > memory, right? Even though this is very expensive, the feature should
> > be available, but documented well enough, warning against its use.
>
> Well, as far as how I handle Ops, this is not how things work.
> Ops goes through detailed capacity planning. Even before Ops, things
> are tested in a staging environment.
>
> So when I take caches to production, I always calculate:
> 1. How many elements I need to store
> 2. The average size of each element, extrapolating to how much
> memory is needed
> 3. What level to set eviction at and how much to evict
> 4. Whether I need expiry
> 5. For read-only caches, I don't want eviction to happen, so I tune
> them accordingly
>
> Not to mention the GC tuning part.
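(For point 2 in the list above, a back-of-the-envelope sketch; all figures
are hypothetical and would come from your own staging measurements.)

    public class CacheSizing {
        public static void main(String[] args) {
            // Hypothetical figures -- substitute numbers measured in staging.
            long entries          = 10_000_000L;  // elements to store
            long avgPayloadBytes  = 2 * 1024L;    // average serialized size per element
            long perEntryOverhead = 64L;          // rough allowance for keys + bookkeeping

            long neededBytes = entries * (avgPayloadBytes + perEntryOverhead);
            System.out.printf("~%,d MB of off-heap memory needed%n",
                    neededBytes / (1024 * 1024));

            // Whatever the result, it must fit under -XX:MaxDirectMemorySize,
            // which caps the total that ByteBuffer.allocateDirect can hand out.
        }
    }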
>
> Again, this is not a hard and fast rule. We all have our own
> preferences & experiences.
>
> From an end user perspective, I want to use DirectMemory, so I:
> a. need a stable release
> b. need some benchmark numbers
>
+1 on these recommendations. We should never sacrifice stability, no matter
the cost.


>
> And I am not against this feature, so please go ahead and implement it :)
> I feel we can take the approach of benchmarking these and writing
> recommendations to the wiki. wdyt?
>
Sounds cool.
1. Benchmarking and recommendations
2. Examples and how-tos
Both are quite crucial.

>
> > If you are concerned about memory fragmentation, it wouldn't lead to
> > a lot of fragmentation if we deallocate and re-allocate contiguous
> > blocks, right? I am under the assumption that allocateDirect allocates
> > contiguous blocks of memory.
>
> It tries to allocate. Say I ask for 64G of direct memory; it shall
> try to allocate contiguous memory.
> Now if we allocate and deallocate, it may not be contiguous, as the OS
> can't predict when the next allocation request will come in.
>
Hmmmm. Something to think about :)
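(For reference, a small sketch of the allocateDirect behaviour being
discussed: each call reserves one contiguous native block, and the public API
has no explicit free; the memory only comes back once the buffer object
itself is garbage collected. The sizes here are just illustrative.)

    import java.nio.ByteBuffer;

    public class DirectAllocationDemo {
        public static void main(String[] args) {
            // One contiguous block of native memory, counted against
            // -XX:MaxDirectMemorySize rather than the Java heap.
            ByteBuffer block = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB

            block.putLong(0, 42L);                // write straight into native memory
            System.out.println(block.getLong(0)); // prints 42

            // There is no public free(): dropping the reference only lets the
            // native block be released once the buffer is garbage collected,
            // so an explicit "deallocate" feature has to work at a higher
            // level (e.g. releasing whole buffers and letting GC reclaim them).
            block = null;
            System.gc(); // a hint only; reclamation timing is up to the JVM
        }
    }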

Cheers,
Akash A
