drill-dev mailing list archives

From Parth Chandra <pchan...@maprtech.com>
Subject Re: Suspicious direct memory consumption when running queries concurrently
Date Mon, 27 Jul 2015 17:51:22 GMT
It is possible that ownership of the memory chunks is being passed to
threads other than the ones that allocated them, and that this is causing
the problem.

One of the issues in Netty was that if a memory allocation was passed from
one thread to another, the receiving thread would add the memory to its own
cache (even though the memory came from an arena associated with the first
thread) and the memory would not be released. This was addressed at some
point, but either that fix does not cover all cases, or we may have a
related issue that still needs to be addressed.
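The caching behavior described above can be sketched in plain Java. This is a hypothetical simulation, not Netty's actual classes: the names `arenaOwner`, `allocate`, and `release` are invented for illustration. The point it demonstrates is that when the releasing thread caches a chunk for itself, the arena that owns the chunk never gets it back and cannot free it.

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical simulation of per-thread chunk caching (not Netty's real
// classes): a thread that releases a chunk puts it into its own
// thread-local cache, even when the chunk belongs to another thread's
// arena, so the owning arena never sees the chunk again.
public class CrossThreadCacheDemo {
    // which "arena" (allocating thread) owns each chunk id
    static final Map<Integer, String> arenaOwner = new ConcurrentHashMap<>();
    // per-thread cache of released chunks
    static final ThreadLocal<ArrayDeque<Integer>> cache =
            ThreadLocal.withInitial(ArrayDeque::new);

    static void allocate(int id) {
        arenaOwner.put(id, Thread.currentThread().getName());
    }

    static void release(int id) {
        // the releasing thread caches the chunk for itself,
        // regardless of which arena originally owned it
        cache.get().push(id);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread allocator = new Thread(() -> allocate(1), "allocator");
        allocator.start();
        allocator.join();

        Thread receiver = new Thread(() -> release(1), "receiver");
        receiver.start();
        receiver.join();

        // the chunk was allocated by "allocator" but is now stranded in
        // "receiver"'s cache; the owning arena cannot reclaim it
        System.out.println("chunk 1 owned by arena: " + arenaOwner.get(1));
    }
}
```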

Do we have a trace of the arenas and their respective allocations between
two consecutive iterations?


On Mon, Jul 27, 2015 at 9:53 AM, Abdel Hakim Deneche <adeneche@maprtech.com>
wrote:
> When running a set of (mostly window function) queries concurrently on a
> single drillbit with an 8GB max direct memory limit, we are seeing a
> continuous increase in direct memory allocation.
> We repeat the following steps multiple times:
> - we launch an "iteration" of tests that runs all queries in a random
> order, 10 queries at a time
> - after the iteration finishes, we wait a couple of minutes to give
> Drill time to release the memory held by the finishing fragments
> Using Drill's memory logger ("drill.allocator"), we were able to get
> snapshots of how memory is used internally by Netty. We focused only on
> the number of allocated chunks: if we take this number and multiply it by
> 16MB (Netty's chunk size), we get approximately the same value reported by
> Drill's direct memory allocation.
> Here is a graph that shows the evolution of the number of allocated chunks
> on a 500 iterations run (I'm working on improving the plots) :
> http://bit.ly/1JL6Kp3
> In this specific case, Drill was allocating ~2GB of direct memory after
> the first iteration; this number kept rising with each iteration, up to
> ~6GB. We suspect this caused one of our previous runs to crash the JVM.
> If we only look at the log lines between iterations (when Drill's memory
> usage is below 10MB), all allocated chunks are at most 2% used. At some
> point we end up with 288 nearly empty chunks, yet the next iteration
> still causes more chunks to be allocated!
> Is this expected?
> PS: I am running more tests and will update this thread with more
> information.
> --
> Abdelhakim Deneche
> Software Engineer
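For reference, the chunk counts in the quoted message translate to memory amounts as follows. This is a quick sanity check assuming Netty's 16MB default chunk size as stated in the thread; the variable names are made up for illustration:

```java
// Sanity-check the figures from the thread: chunk count * 16MB chunk size
// should approximate Drill's reported direct memory usage.
public class ChunkMath {
    public static void main(String[] args) {
        final int CHUNK_MB = 16;                             // Netty's chunk size

        int chunksAfterFirstIter = 2 * 1024 / CHUNK_MB;      // ~2GB -> 128 chunks
        int chunksAtPlateau = 6 * 1024 / CHUNK_MB;           // ~6GB -> 384 chunks
        long idleChunks = 288;                               // nearly empty between iterations
        long idleMB = idleChunks * CHUNK_MB;                 // memory still pinned while idle

        System.out.println(chunksAfterFirstIter);            // 128
        System.out.println(chunksAtPlateau);                 // 384
        System.out.println(idleMB + " MB");                  // 4608 MB, i.e. ~4.5GB
    }
}
```

So even "between iterations", 288 mostly empty chunks pin roughly 4.5GB of the 8GB limit, which is consistent with chunks not being returned to their arenas.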
