hive-dev mailing list archives

From "Hive QA (JIRA)" <>
Subject [jira] [Commented] (HIVE-6537) NullPointerException when loading hashtable for MapJoin directly
Date Tue, 04 Mar 2014 18:27:20 GMT


Hive QA commented on HIVE-6537:

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5238 tests executed
*Failed tests:*

Test results:
Console output:

Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed

This message is automatically generated.


> NullPointerException when loading hashtable for MapJoin directly
> ----------------------------------------------------------------
>                 Key: HIVE-6537
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>         Attachments: HIVE-6537.01.patch, HIVE-6537.2.patch.txt, HIVE-6537.patch
> We see the following error:
> {noformat}
> 2014-02-20 23:33:15,743 FATAL [main] org.apache.hadoop.hive.ql.metadata.HiveException:
>         at
>         at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(
>         at org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(
>         at org.apache.hadoop.hive.ql.exec.MapOperator.process(
>         at
>         at
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(
>         at
>         at org.apache.hadoop.mapred.YarnChild$
>         at Method)
>         at
>         at
>         at org.apache.hadoop.mapred.YarnChild.main(
> Caused by: java.lang.NullPointerException
>         at java.util.Arrays.fill(
>         at
>         at
>         ... 15 more
> {noformat}
> It appears that the {{tables}} array in the Arrays.fill call is null. I don't have a full understanding of this path, but here is what I've gleaned so far...
> From what I can see, {{tables}} is set unconditionally in initializeOp of the sink, and in no other place, so I assume that for this code to ever work, startForward must call it at some point.
> Here, it doesn't call it, so {{tables}} is null.
> The preceding loop also uses {{tables}}, and should have thrown an NPE before fill was ever called; it didn't, so I assume that loop never executed.
> There's some inconsistency in the code above: directWorks are added to parents unconditionally, but the sink is only added as a child conditionally. It may be that some of the direct works are not table scans; in fact, given that the loop never executes, they may be null (which is rather strange).
> Regardless, the logic should be fixed; this inconsistency may be the root cause.
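
The failure mode described above can be reproduced in isolation: `java.util.Arrays.fill` dereferences its array argument immediately, so a field that an init method was supposed to populate but never did produces exactly this NPE. The sketch below is illustrative only; the class and field names are hypothetical stand-ins, not Hive's actual `MapJoinOperator` internals.

```java
import java.util.Arrays;

public class NullTableFillDemo {
    // Hypothetical stand-in for the table-container array that the sink's
    // initializeOp would normally populate. In the failing path described
    // above, initialization never ran, so the field is still null.
    static Object[] tables = null;

    public static void main(String[] args) {
        try {
            // Mirrors the "Caused by" frame in the stack trace:
            // Arrays.fill on a null array throws NullPointerException.
            Arrays.fill(tables, null);
            System.out.println("filled");
        } catch (NullPointerException e) {
            System.out.println("NPE from Arrays.fill on null array");
        }
    }
}
```

A defensive null check before the fill would mask the symptom, but per the description the real fix is ensuring the initialization path (initializeOp via startForward) actually runs for this operator.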

This message was sent by Atlassian JIRA
