drill-dev mailing list archives

From "Victoria Markman (JIRA)" <j...@apache.org>
Subject [jira] [Created] (DRILL-3936) We don't handle out of memory condition during build phase of hash join
Date Wed, 14 Oct 2015 23:14:05 GMT
Victoria Markman created DRILL-3936:

             Summary: We don't handle out of memory condition during build phase of hash join
                 Key: DRILL-3936
                 URL: https://issues.apache.org/jira/browse/DRILL-3936
             Project: Apache Drill
          Issue Type: Bug
          Components: Execution - Relational Operators
            Reporter: Victoria Markman

It looks like we just fall through (see the excerpt from HashJoinBatch.java below):
  public void executeBuildPhase() throws SchemaChangeException, ClassTransformationException, IOException {
    // Setup the underlying hash table

    // skip first batch if count is zero, as it may be an empty schema batch
    if (right.getRecordCount() == 0) {
      for (final VectorWrapper<?> w : right) {
        w.clear();
      }
      rightUpstream = next(right);
    }

    boolean moreData = true;

    while (moreData) {
      switch (rightUpstream) {
      case OUT_OF_MEMORY:
      case NONE:
      case NOT_YET:
      case STOP:
        moreData = false;
We don't handle it later either:
  public IterOutcome innerNext() {
    try {
      /* If we are here for the first time, execute the build phase of the
       * hash join and setup the run time generated class for the probe side
       */
      if (state == BatchState.FIRST) {
        // Build the hash table, using the build side record batches.
        //                IterOutcome next = next(HashJoinHelper.LEFT_INPUT, left);
        hashJoinProbe.setupHashJoinProbe(context, hyperContainer, left, left.getRecordCount(), this, hashTable,
            hjHelper, joinType);

        // Update the hash table related stats for the operator
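A hedged sketch of what handling could look like at this second site: before setting up the probe side, innerNext() checks the outcome recorded by the build phase and propagates OUT_OF_MEMORY to its caller. The class name, constructor, and the idea of caching the build outcome in a field are assumptions for illustration, not code from Drill.

```java
// Hypothetical sketch: innerNext() inspects the build-phase outcome and
// propagates OUT_OF_MEMORY upstream instead of ignoring it.
public class InnerNextSketch {
    enum IterOutcome { OK, OUT_OF_MEMORY, NONE, STOP }

    private final IterOutcome rightUpstream; // outcome recorded by the build phase

    InnerNextSketch(IterOutcome buildOutcome) {
        this.rightUpstream = buildOutcome;
    }

    IterOutcome innerNext() {
        // If the build side ran out of memory, report it to the caller
        // rather than continuing to set up the probe side.
        if (rightUpstream == IterOutcome.OUT_OF_MEMORY) {
            return IterOutcome.OUT_OF_MEMORY;
        }
        return IterOutcome.OK; // proceed with probe setup (elided)
    }

    public static void main(String[] args) {
        System.out.println(new InnerNextSketch(IterOutcome.OUT_OF_MEMORY).innerNext());
        System.out.println(new InnerNextSketch(IterOutcome.OK).innerNext());
    }
}
```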
