drill-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DRILL-4884) Drill produced IOB exception while querying data of 65536 limitation using non batched reader
Date Tue, 25 Oct 2016 04:59:58 GMT

    [ https://issues.apache.org/jira/browse/DRILL-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604196#comment-15604196 ]

ASF GitHub Bot commented on DRILL-4884:
---------------------------------------

Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/584#discussion_r84831079
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/IteratorValidatorBatchIterator.java
---
    @@ -301,7 +301,7 @@ public IterOutcome next() {
                       "Incoming batch [#%d, %s] has an empty schema. This is not allowed.",
                       instNum, batchTypeName));
             }
    -        if (incoming.getRecordCount() > MAX_BATCH_SIZE) {
    +        if (incoming.getRecordCount() >= MAX_BATCH_SIZE) {
    --- End diff ---
    
    Drill requires that batches with no selection vector (SV), and batches with an SV2, are bounded by 65536 records.
This requirement holds across the entire Drill code base. What this IteratorValidator tries to enforce, when assertions
are enabled, is that every incoming batch meets this requirement. However, it is each operator's responsibility to
enforce it. For instance, as you saw, each reader in Drill should produce batches no larger than 65536 records. If you
develop a new storage plugin with a new reader, then the new reader should enforce this rule as well.

    
    Therefore, in your situation where you are developing a new reader, the right approach is to make sure the reader
produces batches no larger than this threshold.
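    As an illustration of the rule described above, here is a minimal, hypothetical sketch (not an actual Drill class
or Drill's reader API) of a reader that caps how many records it writes into one batch and reports the remainder on the
next call:
{noformat}
import java.util.Iterator;
import java.util.List;

// Hypothetical reader sketch: the only point illustrated is that a reader stops
// filling the current batch before the SV2 bound and continues in the next call.
public class CappedReader {
  // Stay strictly below 65536, the batch/SV2 bound discussed in this thread.
  private static final int MAX_RECORDS_PER_BATCH = 65535;

  private final Iterator<String> source;   // stand-in for the underlying data source

  public CappedReader(Iterator<String> source) {
    this.source = source;
  }

  /** Fills one batch and returns how many records were written, never more than the cap. */
  public int next(List<String> batch) {
    batch.clear();
    int count = 0;
    while (count < MAX_RECORDS_PER_BATCH && source.hasNext()) {
      batch.add(source.next());
      count++;
    }
    return count;   // 0 signals that the reader is exhausted
  }
}
{noformat}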
    
    The original code in IteratorValidatorBatchIterator.java should be fine. For the repro I tried, I feel the fix
should be in LimitRecordBatch.java. As you indicated earlier, the index "i" is defined as a char, which is not right.
    
    Would you like to modify your patch?
    



> Drill produced IOB exception while querying data of 65536 limitation using non batched reader
> ---------------------------------------------------------------------------------------------
>
>                 Key: DRILL-4884
>                 URL: https://issues.apache.org/jira/browse/DRILL-4884
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Query Planning & Optimization
>    Affects Versions: 1.8.0
>         Environment: CentOS 6.5 / JAVA 8
>            Reporter: Hongze Zhang
>            Assignee: Jinfeng Ni
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Drill produces an IOB exception when using a non-batched scanner and a query whose limit plus offset reaches 65536 records.
> SQL:
> {noformat}
> select id from xx limit 1 offset 65535
> {noformat}
> Result:
> {noformat}
> 	at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534) ~[classes/:na]
> 	at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:324) [classes/:na]
> 	at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:184) [classes/:na]
> 	at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:290) [classes/:na]
> 	at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [classes/:na]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
> 	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> Caused by: java.lang.IndexOutOfBoundsException: index: 131072, length: 2 (expected: range(0, 131072))
> 	at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:175) ~[classes/:4.0.27.Final]
> 	at io.netty.buffer.DrillBuf.chk(DrillBuf.java:197) ~[classes/:4.0.27.Final]
> 	at io.netty.buffer.DrillBuf.setChar(DrillBuf.java:517) ~[classes/:4.0.27.Final]
> 	at org.apache.drill.exec.record.selection.SelectionVector2.setIndex(SelectionVector2.java:79) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.limitWithNoSV(LimitRecordBatch.java:167) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.doWork(LimitRecordBatch.java:145) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:94) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132) ~[classes/:na]
> 	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81) ~[classes/:na]
> 	at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[classes/:na]
> 	at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256) ~[classes/:na]
> 	at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250) ~[classes/:na]
> 	at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_101]
> 	at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_101]
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
> 	at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250) [classes/:na]
> 	... 4 common frames omitted
> {noformat}
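To see where the numbers in the IndexOutOfBoundsException come from: as the trace suggests, an SV2 stores one 2-byte
entry per record (hence DrillBuf.setChar), so a buffer sized for 65536 records holds 131072 bytes and the last legal
2-byte write starts at offset 131070. Writing entry 65536 targets byte offset 131072, which is out of range. A minimal
sketch of the same arithmetic with a plain ByteBuffer (not Drill code):
{noformat}
import java.nio.ByteBuffer;

public class Sv2OverflowSketch {
  public static void main(String[] args) {
    final int records = 65536;                          // batch size that triggers the bug
    ByteBuffer sv2 = ByteBuffer.allocate(records * 2);  // 2 bytes per selection-vector entry

    sv2.putChar(65535 * 2, (char) 65535);               // last legal entry: byte offset 131070
    sv2.putChar(65536 * 2, (char) 0);                   // byte offset 131072 -> IndexOutOfBoundsException
  }
}
{noformat}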
> Code from IteratorValidatorBatchIterator.java shows that it is OK for the incoming batch to return 65536 records:
> {noformat}
> if (incoming.getRecordCount() > MAX_BATCH_SIZE) { // MAX_BATCH_SIZE == 65536
>   throw new IllegalStateException(
>       String.format(
>           "Incoming batch [#%d, %s] has size %d, which is beyond the"
>           + " limit of %d",
>           instNum, batchTypeName, incoming.getRecordCount(), MAX_BATCH_SIZE));
> }
> {noformat}
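Note the boundary in the check above: with the `>` comparison, a batch of exactly 65536 records passes the validator
silently; the `>=` form proposed in the pull request would reject it. A purely illustrative comparison of the two
conditions, using the values from this reproduction:
{noformat}
// Illustrative only: how the current and the proposed comparison treat a 65536-record batch.
int MAX_BATCH_SIZE = 65536;                      // the constant quoted in the snippet above
int recordCount = 65536;                         // batch size produced in this reproduction

boolean currentCheckTrips  = recordCount >  MAX_BATCH_SIZE;   // false -> validator stays silent
boolean proposedCheckTrips = recordCount >= MAX_BATCH_SIZE;   // true  -> batch would be rejected
{noformat}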
> Code from LimitRecordBatch.java shows that the loop will not break as expected when the incoming batch returns 65536 records:
> {noformat}
>   private void limitWithNoSV(int recordCount) {
>     final int offset = Math.max(0, Math.min(recordCount - 1, recordsToSkip));
>     recordsToSkip -= offset;
>     int fetch;
>     if(noEndLimit) {
>       fetch = recordCount;
>     } else {
>       fetch = Math.min(recordCount, offset + recordsLeft);
>       recordsLeft -= Math.max(0, fetch - offset);
>     }
>     int svIndex = 0;
>     // Since fetch == recordCount == 65536, the char index i wraps from 65535 back to 0 instead
>     // of ever reaching 65536, so the loop does not break as expected and keeps running.
>     for(char i = (char) offset; i < fetch; svIndex++, i++) {
>       outgoingSv.setIndex(svIndex, i);
>     }
>     outgoingSv.setRecordCount(svIndex);
>   }
> {noformat}
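One possible way to repair the loop, in line with the reviewer's comment about the char index, is sketched below. This
is only a sketch, not necessarily the committed patch; it reuses the fields shown in the snippet above. The idea is to
iterate with an int so the counter cannot wrap at 65536, and narrow to the 2-byte SV2 value only when writing the entry:
{noformat}
  private void limitWithNoSV(int recordCount) {
    final int offset = Math.max(0, Math.min(recordCount - 1, recordsToSkip));
    recordsToSkip -= offset;
    int fetch;
    if (noEndLimit) {
      fetch = recordCount;
    } else {
      fetch = Math.min(recordCount, offset + recordsLeft);
      recordsLeft -= Math.max(0, fetch - offset);
    }
    int svIndex = 0;
    // An int index cannot wrap around at 65536, so the loop terminates once i reaches fetch.
    for (int i = offset; i < fetch; svIndex++, i++) {
      outgoingSv.setIndex(svIndex, (char) i);   // i < 65536 inside the loop, so the narrowing cast is lossless
    }
    outgoingSv.setRecordCount(svIndex);
  }
{noformat}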
> The IllegalStateException should be thrown when the incoming batch exceeds 65535 records rather than 65536.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
