hive-dev mailing list archives

From "Namit Jain (JIRA)" <>
Subject [jira] Commented: (HIVE-1918) Add export/import facilities to the hive system
Date Wed, 09 Feb 2011 06:55:57 GMT


Namit Jain commented on HIVE-1918:

Reading from:

Importing into Existing Tables

This section describes factors to take into account when you import data into existing tables.

Manually Creating Tables Before Importing Data

When you choose to create tables manually before importing data into them from an export file,
you should use either the same table definition previously used or a compatible format. For
example, although you can increase the width of columns and change their order, you cannot
do the following:

- Add NOT NULL columns
- Change the datatype of a column to an incompatible datatype (LONG to NUMBER, for example)
- Change the definition of object types used in a table
- Change DEFAULT column values

When tables are manually created before data is imported, the CREATE TABLE statement in the
export dump file will fail because the table already exists. To avoid this failure and continue
loading data into the table, set the import parameter IGNORE=y. Otherwise, no data will be
loaded into the table because of the table creation error.
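The IGNORE=y semantics described above can be sketched as a small decision function. This is a hypothetical illustration of the behavior under discussion, not Hive's (or Oracle's) actual implementation; all names are made up:

```python
# Sketch of "ignore existing table" import semantics: when the target
# table already exists, either abort the import or skip the table
# creation step and continue loading data. Illustrative only.

def plan_import(table_exists: bool, ignore_existing: bool) -> list:
    """Return the list of import steps to run."""
    if table_exists and not ignore_existing:
        # Default behavior: the CREATE TABLE in the dump fails,
        # so no data is loaded at all.
        raise RuntimeError(
            "table already exists; set ignore_existing to load data anyway")
    steps = []
    if not table_exists:
        steps.append("CREATE TABLE")
    # With ignore_existing set, the creation error is suppressed and
    # rows are loaded into the pre-existing table.
    steps.append("LOAD DATA")
    return steps
```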

Do you want to support this? It seems like a reasonable thing to have. Currently, an error
is thrown during import if the table already exists.
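For reference, a sketch in the EXPORT/IMPORT syntax proposed in the attached patches (table and path names are illustrative); the open question is what IMPORT should do when the target table already exists:

```sql
-- Export a table's data and metadata to an HDFS directory
-- (table and path names are made up for illustration).
EXPORT TABLE managed_emp TO '/warehouse/exports/managed_emp';

-- Import on the target cluster; currently this fails if the table
-- already exists. The question is whether an "ignore"-style option
-- should let the data load proceed anyway.
IMPORT TABLE managed_emp FROM '/warehouse/exports/managed_emp';
```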

> Add export/import facilities to the hive system
> -----------------------------------------------
>                 Key: HIVE-1918
>                 URL:
>             Project: Hive
>          Issue Type: New Feature
>          Components: Query Processor
>            Reporter: Krishna Kumar
>            Assignee: Krishna Kumar
>         Attachments: HIVE-1918.patch.1.txt, HIVE-1918.patch.2.txt, HIVE-1918.patch.3.txt,
HIVE-1918.patch.txt, hive-metastore-er.pdf
> This is an enhancement request to add export/import features to hive.
> With this language extension, the user can export the data of a table - which may be
located in multiple HDFS locations in the case of a partitioned table - as well as the metadata
of the table into a specified output location. This output location can then be moved over
to a different Hadoop/Hive instance and imported there.
> This should work independently of the source and target metastore DBMSes used; for instance,
between Derby and MySQL.
> For partitioned tables, the ability to export/import a subset of the partitions must be supported.
> Howl will add more features on top of this: the ability to create/use the exported data
even in the absence of Hive, using MR or Pig. Please see
for these details.
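A partition-subset export/import, in the same proposed syntax (the partition column and values here are illustrative):

```sql
-- Export only a selected partition of a partitioned table.
EXPORT TABLE employee PARTITION (emp_country='in')
  TO '/warehouse/exports/employee_in';

-- Import just that partition on the destination instance.
IMPORT TABLE employee PARTITION (emp_country='in')
  FROM '/warehouse/exports/employee_in';
```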

This message is automatically generated by JIRA.
