drill-issues mailing list archives

From "Paul Rogers (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DRILL-5027) ExternalSortBatch can use excessive memory for very large queries
Date Thu, 10 Nov 2016 05:28:58 GMT

    [ https://issues.apache.org/jira/browse/DRILL-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653085#comment-15653085 ]

Paul Rogers commented on DRILL-5027:
------------------------------------

Inspection reveals that the intermediate spill steps do not clean up the previous generation
of spill files, so those files remain on disk until the Drillbit exits. That is, if we merge
spill files (a, b, c, d) to produce file e, the first four files stay on disk. In particular,
the {{mergeAndSpill}} method does not call {{BatchGroup.close()}} on each file that is merged
into the new spilled batch.

However, the merged batch groups are removed from the {{spilledBatchGroups}} list, so the ESB
cannot remove their files when the query completes either.

The files are marked "delete on exit", so they are removed only when the Drillbit itself exits.
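
For illustration, here is a minimal, self-contained sketch of the cleanup pattern described
above. It is not Drill's actual {{BatchGroup}}/{{mergeAndSpill}} code: the {{SpillRun}} class,
the temp-file handling, and the merge stub are simplified stand-ins; the only point is that the
merged inputs must be closed once the new run exists.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a spilled run: one spill file on disk.
class SpillRun implements AutoCloseable {
  final Path file;
  SpillRun(Path file) { this.file = file; }
  @Override
  public void close() throws IOException {
    // What closing a merged-away run should achieve: delete its file now,
    // not at Drillbit exit.
    Files.deleteIfExists(file);
  }
}

public class MergeAndSpillSketch {
  // Merge runs (a, b, c, d) into a new run (e), then close the inputs so
  // their spill files disappear immediately.
  static SpillRun mergeAndSpill(List<SpillRun> spilledRuns) throws IOException {
    List<SpillRun> inputs = new ArrayList<>(spilledRuns);
    spilledRuns.clear();                       // inputs leave the tracking list here...

    Path mergedFile = Files.createTempFile("drill-spill-", ".run");
    // (real code would stream-merge the sorted batches of every input into mergedFile)

    for (SpillRun run : inputs) {              // ...so they must be closed here, or they leak
      run.close();
    }
    return new SpillRun(mergedFile);
  }

  public static void main(String[] args) throws IOException {
    List<SpillRun> runs = new ArrayList<>();
    for (int i = 0; i < 4; i++) {
      runs.add(new SpillRun(Files.createTempFile("drill-spill-", ".run")));
    }
    try (SpillRun merged = mergeAndSpill(runs)) {
      // the merged run would be handed to the next merge phase or the downstream reader
    }
  }
}
{code}

With this shape, the files behind runs (a, b, c, d) are deleted as soon as run e exists, rather
than lingering until the Drillbit exits.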

> ExternalSortBatch can use excessive memory for very large queries
> -----------------------------------------------------------------
>
>                 Key: DRILL-5027
>                 URL: https://issues.apache.org/jira/browse/DRILL-5027
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.8.0
>            Reporter: Paul Rogers
>            Priority: Minor
>
> The {{ExternalSortBatch}} (ESB) operator sorts data while spilling to disk as needed to operate within a memory budget.
> The sort happens in two phases:
> 1. Gather the incoming batches from the upstream operator, sort them, and spill to disk as needed.
> 2. Merge the "runs" spilled in step 1.
> In most cases, the second step should run within the memory available for the first step (which is why severity is only Minor). However, if the query is exceptionally large, then the second step can use excessive memory.
> Here is why.
> * In step 1, we create a series of n spill files. Each file contains batches of some maximum size, say 20 MB.
> * In step 2, we must simultaneously open all of the spilled files and read the first batch of each into memory in preparation for merging.
> Suppose that the query has 1 TB of data, that 10 GB of memory is available, and that each spill file holds roughly 1 GB of data. The result will be 1 TB / 1 GB = 1000 spill files. Suppose each batch in each file is 20 MB. A single-pass merge will then need 20 MB * 1000 = 20 GB to operate. But, we only have 10 GB.
> The typical solution is to perform the merge in multiple phases, with each phase reading and re-spilling only as many runs as will fit in memory. In the above case, the first phase would make two iterations, each merging 500 runs (500 GB of data) within the 10 GB of memory. The second phase would then make a single iteration to merge the two files produced by the first phase.
> The result is much more disk I/O, and a requirement for enough disk space to store two sets of spill files (one from phase i-1, another for phase i), but the query will complete within the memory budgeted.
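
To make the arithmetic in the quoted description concrete, here is a small sketch using the
same illustrative numbers: 1 TB of input, spill files of roughly 1 GB each, 20 MB batches, and
a 10 GB memory budget. The run size is an assumption implied by the 1 TB / 1 GB division; none
of this is ESB code.

{code:java}
public class SinglePassMergeMemory {
  public static void main(String[] args) {
    long dataBytes   = 1_000_000_000_000L;  // 1 TB of input data
    long runBytes    = 1_000_000_000L;      // ~1 GB of data per spill file (assumed run size)
    long batchBytes  = 20_000_000L;         // 20 MB batch read from each run during the merge
    long memoryBytes = 10_000_000_000L;     // 10 GB memory budget

    long spillFiles = dataBytes / runBytes;       // 1 TB / 1 GB = 1000 runs
    long mergeNeeds = spillFiles * batchBytes;    // one batch per run must be in memory at once

    System.out.printf("runs=%d, single-pass merge needs %.0f GB, budget is %.0f GB%n",
        spillFiles, mergeNeeds / 1e9, memoryBytes / 1e9);
    // Prints: runs=1000, single-pass merge needs 20 GB, budget is 10 GB
  }
}
{code}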
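
Along the same lines, here is a sketch of how a multi-phase merge plan could be derived from
the batch size and the memory budget. The fan-in calculation and the loop are an assumption
about how such a planner might look, not the ESB's actual logic.

{code:java}
public class MultiPhaseMergePlan {
  public static void main(String[] args) {
    long batchBytes  = 20_000_000L;          // 20 MB per in-memory batch
    long memoryBytes = 10_000_000_000L;      // 10 GB budget
    long runs        = 1000;                 // spill files left behind by step 1

    // Maximum merge fan-in: one batch from each input run must fit in memory.
    long fanIn = memoryBytes / batchBytes;   // 500 with these numbers

    int phase = 1;
    while (runs > fanIn) {
      long merges = (runs + fanIn - 1) / fanIn;  // iterations needed in this phase
      System.out.printf("phase %d: %d merge(s) of up to %d runs each%n", phase, merges, fanIn);
      runs = merges;                             // each merge re-spills one larger run
      phase++;
    }
    System.out.printf("phase %d: final merge of %d run(s)%n", phase, runs);
    // Prints:
    // phase 1: 2 merge(s) of up to 500 runs each
    // phase 2: final merge of 2 run(s)
  }
}
{code}

With these numbers the plan matches the description above: phase 1 runs two merges of 500 runs
each, and phase 2 performs the final merge of the two intermediate runs.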



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
