hive-dev mailing list archives

From "Krishna Kumar (JIRA)" <>
Subject [jira] Commented: (HIVE-1918) Add export/import facilities to the hive system
Date Wed, 09 Feb 2011 08:03:57 GMT


Krishna Kumar commented on HIVE-1918:

Importing into existing tables is now supported, but the checks (to see whether the imported
table and the target table are compatible) have been kept fairly simple for now; please see
ImportSemanticAnalyzer.checkTable. The schemas (column and partition) of the two tables must
match exactly, except for comments. Since we are just moving files (rather than rewriting
records), I think there will be issues if the metadata schema does not match the data
serialization exactly (in terms of types, number of columns, etc.).
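
To illustrate the constraint above, here is a minimal HiveQL sketch of importing into an existing table, assuming the EXPORT/IMPORT syntax proposed in the attached patches (the table names and paths are hypothetical):

```sql
-- Export the data and metadata of a table to an HDFS directory (hypothetical path).
EXPORT TABLE employees TO '/user/hive/export/employees';

-- Import into an existing table: per ImportSemanticAnalyzer.checkTable, the
-- target's column and partition schemas must match the exported metadata
-- exactly (comments excepted), since the data files are moved, not rewritten.
IMPORT TABLE employees_copy FROM '/user/hive/export/employees';
```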

Re the earlier comment on outputs/inputs: got what you meant. I will add the table/partition
to the inputs in ExportSemanticAnalyzer. But in the case of imports, I see that the tasks
themselves add the entity operated upon to the inputs/outputs lists. Isn't that too late for
authorization/concurrency, even though it may work for replication? Or are both the semantic
analyzers and the tasks expected to add them? In the case of a newly created table/partition,
the semantic analyzer does not have a handle to it?

> Add export/import facilities to the hive system
> -----------------------------------------------
>                 Key: HIVE-1918
>                 URL:
>             Project: Hive
>          Issue Type: New Feature
>          Components: Query Processor
>            Reporter: Krishna Kumar
>            Assignee: Krishna Kumar
>         Attachments: HIVE-1918.patch.1.txt, HIVE-1918.patch.2.txt, HIVE-1918.patch.3.txt,
> HIVE-1918.patch.txt, hive-metastore-er.pdf
> This is an enhancement request to add export/import features to hive.
> With this language extension, the user can export the data of a table - which may be
> located in different hdfs locations in the case of a partitioned table - as well as the
> metadata of the table into a specified output location. This output location can then be
> moved over to a different hadoop/hive instance and imported there.
> This should work independently of the source and target metastore dbms used; for instance,
> between derby and mysql.
> For partitioned tables, the ability to export/import a subset of the partitions must be
> supported as well.
> Howl will add more features on top of this: the ability to create/use the exported data
> even in the absence of hive, using MR or Pig. Please see
for these details.
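
As a concrete sketch of the workflow described above (hypothetical table names and paths; syntax as proposed in the attached patches):

```sql
-- On the source instance: export one partition of a partitioned table.
EXPORT TABLE page_views PARTITION (ds='2011-02-09')
TO '/user/hive/export/page_views';

-- Move the export directory to the target cluster, e.g. with distcp:
--   hadoop distcp hdfs://src/user/hive/export/page_views \
--                 hdfs://dst/user/hive/import/page_views

-- On the target instance (possibly backed by a different metastore dbms):
IMPORT TABLE page_views_imported
FROM '/user/hive/import/page_views';
```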

