hive-dev mailing list archives

From "Srinivas (JIRA)" <>
Subject [jira] [Commented] (HIVE-2907) Hive error when dropping a table with large number of partitions
Date Mon, 21 May 2012 17:44:41 GMT


Srinivas commented on HIVE-2907:

I downloaded the source code for Hive-0.9.1. However, it appears to be missing the
fix that fetches partition metadata in batches, so it can still run out of memory
when dropping a table with a large number of partitions.

Proposed fix in the dropTable method:

    int partitionBatchSize = HiveConf.getIntVar(getConf(),
        ConfVars.METASTORE_BATCH_RETRIEVE_MAX);

    // call dropPartition on each of the table's partitions to follow the
    // procedure for cleanly dropping partitions, fetching them in batches
    // of at most partitionBatchSize so we never hold them all in memory.
    while (true) {
      List<MPartition> partsToDelete = listMPartitions(dbName, tableName, partitionBatchSize);
      if (partsToDelete == null || partsToDelete.isEmpty()) {
        break;
      }
      for (MPartition mpart : partsToDelete) {
        dropPartitionCommon(mpart);
      }
    }
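The batching pattern proposed above can be sketched in isolation. This is a hypothetical standalone illustration, not the actual Hive metastore API: Partition strings, fetchPartitionBatch, and dropPartition are stand-ins for MPartition, listMPartitions, and the real drop logic.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of dropping partitions in bounded batches, so memory
// use is O(batchSize) rather than O(total partitions). Names are illustrative
// stand-ins, not Hive's metastore classes.
public class BatchedDrop {
    // Simulated partition store.
    private final List<String> partitions = new ArrayList<>();

    public BatchedDrop(int count) {
        for (int i = 0; i < count; i++) {
            partitions.add("part_" + i);
        }
    }

    // Fetch at most batchSize partitions (stand-in for listMPartitions).
    private List<String> fetchPartitionBatch(int batchSize) {
        int n = Math.min(batchSize, partitions.size());
        return new ArrayList<>(partitions.subList(0, n));
    }

    // Drop a single partition (stand-in for the per-partition drop logic).
    private void dropPartition(String part) {
        partitions.remove(part);
    }

    // Repeatedly fetch a bounded batch and drop each member until none remain.
    public int dropAllPartitions(int batchSize) {
        int dropped = 0;
        while (true) {
            List<String> batch = fetchPartitionBatch(batchSize);
            if (batch.isEmpty()) {
                break;
            }
            for (String part : batch) {
                dropPartition(part);
                dropped++;
            }
        }
        return dropped;
    }

    public static void main(String[] args) {
        BatchedDrop store = new BatchedDrop(1000);
        System.out.println(store.dropAllPartitions(300)); // prints 1000
    }
}
```

With 1000 partitions and a batch size of 300, the loop runs four times (300 + 300 + 300 + 100); at no point are more than 300 partition objects held at once.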
> Hive error when dropping a table with large number of partitions
> ----------------------------------------------------------------
>                 Key: HIVE-2907
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.9.0
>         Environment: General. Hive Metastore bug.
>            Reporter: Mousom Dhar Gupta
>            Assignee: Mousom Dhar Gupta
>            Priority: Minor
>             Fix For: 0.9.0
>         Attachments: HIVE-2907.1.patch.txt, HIVE-2907.2.patch.txt, HIVE-2907.3.patch.txt,
> HIVE-2907.D2505.1.patch, HIVE-2907.D2505.2.patch, HIVE-2907.D2505.3.patch, HIVE-2907.D2505.4.patch,
> HIVE-2907.D2505.5.patch, HIVE-2907.D2505.6.patch, HIVE-2907.D2505.7.patch
>   Original Estimate: 10h
>  Remaining Estimate: 10h
> Running into an "Out Of Memory" error when trying to drop a table with 128K partitions.
> The methods dropTable in metastore/src/java/org/apache/hadoop/hive/metastore/
> and dropTable in ql/src/java/org/apache/hadoop/hive/ql/exec/ encounter
> out-of-memory errors when dropping tables with lots of partitions because
> they try to load the metadata for every partition into memory.
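The batch size used by the proposed fix maps to a metastore configuration property. Assuming the standard hive.metastore.batch.retrieve.max setting, a hive-site.xml entry might look like:

```xml
<!-- Hypothetical hive-site.xml fragment: caps how many partition objects
     the metastore fetches per round trip. -->
<property>
  <name>hive.metastore.batch.retrieve.max</name>
  <value>300</value>
</property>
```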

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.

