drill-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DRILL-5512) Standardize error handling in ScanBatch
Date Fri, 19 May 2017 22:58:04 GMT

    [ https://issues.apache.org/jira/browse/DRILL-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16018144#comment-16018144
] 

ASF GitHub Bot commented on DRILL-5512:
---------------------------------------

Github user sudheeshkatkam commented on a diff in the pull request:

    https://github.com/apache/drill/pull/838#discussion_r117590009
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java
---
    @@ -173,9 +174,8 @@ public IterOutcome next() {
     
             currentReader.allocate(mutator.fieldVectorMap());
           } catch (OutOfMemoryException e) {
    -        logger.debug("Caught Out of Memory Exception", e);
             clearFieldVectorMap();
    -        return IterOutcome.OUT_OF_MEMORY;
    +        throw UserException.memoryError(e).build(logger);
    --- End diff --
    
    I am not sure if this specific line change is required, so please correct me if I am wrong.
Thinking out loud...
    
    There are three places in ScanBatch where OutOfMemoryException is handled. Since OutOfMemoryException
is an unchecked exception, I could not quickly find all the calls that trigger the exception
in this method.
    
    The [first case](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L175)
and [second case](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L215)
are similar in that `reader.allocate(...)` fails. So although there is no unwind logic, it seems
to me this case is correctly handled: no records have been read, so there is no need
to unwind. Say this triggers spilling in the sort; then the query could complete successfully
if allocate succeeds next time (and so on). Am I following this logic correctly?
    
    But this does not seem to be the case, as [TestOutOfMemoryOutcome](https://github.com/apache/drill/blob/master/exec/java-exec/src/test/java/org/apache/drill/TestOutOfMemoryOutcome.java#L65)
triggers an OutOfMemoryException during the ["next" allocation](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L172),
and all tests are expected to fail.
    
    And then, there is the [third case](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L247),
which is a general catch (e.g. `reader.next()` throws OutOfMemoryException). And as you mentioned,
readers cannot unwind, so that correctly fails the fragment.


> Standardize error handling in ScanBatch
> ---------------------------------------
>
>                 Key: DRILL-5512
>                 URL: https://issues.apache.org/jira/browse/DRILL-5512
>             Project: Apache Drill
>          Issue Type: Improvement
>    Affects Versions: 1.10.0
>            Reporter: Paul Rogers
>            Assignee: Paul Rogers
>            Priority: Minor
>              Labels: ready-to-commit
>             Fix For: 1.10.0
>
>
> ScanBatch is the Drill operator executor that handles most readers. Like most Drill operators,
it uses an ad-hoc set of error detection and reporting methods that evolved over Drill development.
> This ticket asks to standardize on error handling as outlined in DRILL-5083. This basically
means reporting all errors as a {{UserException}} rather than using the {{IterOutcome.STOP}}
return status or using the {{FragmentContext.fail()}} method.
> This work requires the new error codes introduced in DRILL-5511, and is a step toward
making readers aware of vector size limits.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
