hive-dev mailing list archives

From "Carl Steinbach (JIRA)" <>
Subject [jira] [Updated] (HIVE-3387) meta data file size exceeds limit
Date Sun, 09 Sep 2012 22:25:07 GMT


Carl Steinbach updated HIVE-3387:

       Resolution: Fixed
    Fix Version/s:     (was: 0.9.1)
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Navis!
> meta data file size exceeds limit
> ---------------------------------
>                 Key: HIVE-3387
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.7.1
>            Reporter: Alexander Alten-Lorenz
>            Assignee: Navis
>             Fix For: 0.10.0
>         Attachments: HIVE-3387.1.patch.txt
> The cause is almost certainly that we use an array list instead of a set structure in
> the split locations API. This looks like a bug in Hive's CombineFileInputFormat.
> Reproduce:
> Set mapreduce.jobtracker.split.metainfo.maxsize=100000000 when submitting the Hive
> query. Run a big Hive query that writes data into a partitioned table. Due to the large
> number of splits, the job submitted to Hadoop fails with an exception that says:
> meta data size exceeds 100000000.
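The list-versus-set issue described above can be sketched as follows. This is a minimal, hypothetical Java illustration (the class, method, and host names are invented for demonstration and are not Hive's actual code): when many blocks share the same replica hosts, accumulating their locations in a list keeps every duplicate, so the serialized split meta info grows with the number of blocks, while a set keeps it bounded by the number of distinct hosts.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class SplitLocations {
    // Collect locations for a combined split using a List:
    // duplicate hosts from every block are retained.
    static String[] combineWithList(List<String[]> blockHosts) {
        List<String> locs = new ArrayList<>();
        for (String[] hosts : blockHosts) {
            locs.addAll(Arrays.asList(hosts)); // duplicates pile up
        }
        return locs.toArray(new String[0]);
    }

    // Collect locations using a Set: duplicate hosts collapse,
    // so the location array stays small regardless of block count.
    static String[] combineWithSet(List<String[]> blockHosts) {
        Set<String> locs = new LinkedHashSet<>();
        for (String[] hosts : blockHosts) {
            locs.addAll(Arrays.asList(hosts)); // duplicates collapse
        }
        return locs.toArray(new String[0]);
    }

    public static void main(String[] args) {
        // 1000 blocks, each replicated on the same three hosts.
        List<String[]> blocks = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            blocks.add(new String[] {"host1", "host2", "host3"});
        }
        System.out.println(combineWithList(blocks).length); // 3000 entries
        System.out.println(combineWithSet(blocks).length);  // 3 entries
    }
}
```

With the list variant, the per-split location arrays written into the job's split meta info file scale with the total block count, which is what pushes the file past mapreduce.jobtracker.split.metainfo.maxsize on large partitioned inserts; deduplicating with a set keeps each split's location list proportional to the replica host count only.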

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
